bup-0.29/.dir-locals.el:

((nil . ())
 (python-mode . ((indent-tabs-mode . nil)
                 (python-indent-offset . 4)))
 (sh-mode . ((indent-tabs-mode . nil)
             (sh-basic-offset . 4)))
 (c-mode . ((indent-tabs-mode . nil)
            (c-basic-offset . 4)
            (c-file-style . "BSD"))))

bup-0.29/.github/CONTRIBUTING.md:

Please see https://github.com/bup/bup/blob/master/HACKING

bup-0.29/.github/PULL_REQUEST_TEMPLATE:

We discuss code changes on the mailing list bup-list@googlegroups.com, but if
you'd prefer to begin the process with a pull request, that's just fine.
We're happy to have the help.

In any case, please make sure each commit includes a Signed-off-by: Someone
line in the commit message that matches the "Author" line so that we'll be
able to include your work in the project.  See ./SIGNED-OFF-BY for the
meaning:

  https://github.com/bup/bup/blob/master/SIGNED-OFF-BY

After you submit the pull request, someone will eventually redirect it to the
list for review, and you will of course be included in the conversation
there.

On the other hand, if you're comfortable with "git send-email" (or the
equivalent), please post your patches to the list as described in the
"Submitting Patches" section in ./HACKING:

  https://github.com/bup/bup/blob/master/HACKING

bup-0.29/.gitignore:

/bup
/cmd/bup-*
/cmd/python-cmd.sh
randomgen
memtest
*.o
*.so
*.exe
*.dll
*~
*.pyc
*.tmp
*.tmp.meta
/build
*.swp
nbproject
/t/sampledata/var/
/t/tmp/

bup-0.29/CODINGSTYLE:

Python code follows PEP8 [1] with regard to coding style and PEP257 [2] with
regard to docstring style.

Multi-line docstrings should have one short summary line, followed by a blank
line and a series of paragraphs.  The last paragraph should be followed by a
line that closes the docstring (no blank line in between).  Here's an example
from lib/bup/helpers.py:

    def unlink(f):
        """Delete a file at path 'f' if it currently exists.

        Unlike os.unlink(), does not throw an exception if the file didn't
        already exist.
        """
        #code...

Module-level docstrings follow exactly the same guidelines but without the
blank line between the summary and the details.

The C implementations should follow the kernel/git coding style [3].

[1]: http://www.python.org/dev/peps/pep-0008/
[2]: http://www.python.org/dev/peps/pep-0257/
[3]: http://www.kernel.org/doc/Documentation/CodingStyle

bup-0.29/DESIGN:

The Crazy Hacker's Crazy Guide to Bup Craziness
===============================================

Despite what you might have heard, bup is not that crazy, and neither are you
if you're trying to figure out how it works.  But it's also (as of this
writing) rather new and the source code doesn't have a lot of comments, so it
can be a little confusing at first glance.  This document is designed to make
it easier for you to get started if you want to add a new feature, fix a bug,
or just understand how it all works.

Bup Source Code Layout ---------------------- As you're reading this, you might want to look at different parts of the bup source code to follow along and see what we're talking about. bup's code is written primarily in python with a bit of C code in speed-sensitive places. Here are the most important things to know: - bup (symlinked to main.py) is the main program that runs when you type 'bup'. - cmd/bup-* (mostly symlinked to cmd/*-cmd.py) are the individual subcommands, in a way similar to how git breaks all its subcommands into separate programs. Not all the programs have to be written in python; they could be in any language, as long as they end up named cmd/bup-*. We might end up re-coding large parts of bup in C eventually so that it can be even faster and (perhaps) more portable. - lib/bup/*.py are python library files used by the cmd/*.py commands. That directory name seems a little silly (and worse, redundant) but there seemed to be no better way to let programs write "from bup import index" and have it work. Putting bup in the top level conflicted with the 'bup' command; calling it anything other than 'bup' was fundamentally wrong, and doesn't work when you install bup on your system in /usr/lib somewhere. So we get the annoyingly long paths. Repository Structure ==================== Before you can talk about how bup works, we need to first address what it does. The purpose of bup is essentially to let you "replicate" data between two main data structures: 1. Your computer's filesystem; 2. A bup repository. (Yes, we know, that part also resides in your filesystem. Stop trying to confuse yourself. Don't worry, we'll be plenty confusing enough as it is.) Essentially, copying data from the filesystem to your repository is called "backing stuff up," which is what bup specializes in. Normally you initiate a backup using the 'bup save' command, but that's getting ahead of ourselves. For the inverse operation, ie. copying from the repository to your filesystem, you have several choices; the main ones are 'bup restore', 'bup ftp', 'bup fuse', and 'bup web'. Now, those are the basics of backups. In other words, we just spent about half a page telling you that bup backs up and restores data. Are we having fun yet? The next thing you'll want to know is the format of the bup repository, because hacking on bup is rather impossible unless you understand that part. In short, a bup repository is a git repository. If you don't know about git, you'll want to read about it now. A really good article to read is "Git for Computer Scientists" - you can find it in Google. Go read it now. We'll wait. Got it? Okay, so now you're an expert in blobs, trees, commits, and refs, the four building blocks of a git repository. bup uses these four things, and they're formatted in exactly the same way as git does it, so you can use git to manipulate the bup repository if you want, and you probably won't break anything. It's also a comfort to know you can squeeze data out using git, just in case bup fails you, and as a developer, git offers some nice tools (like 'git rev-list' and 'git log' and 'git diff' and 'git show' and so on) that allow you to explore your repository and help debug when things go wrong. Now, bup does use these tools a little bit differently than plain git. 
We need to do this in order to address three deficiencies in git when used for large backups, namely a) git bogs down and crashes if you give it really large files; b) git is too slow when you give it too many files; and c) git doesn't store detailed filesystem metadata.  Let's talk about each of those problems in turn.

Handling large files (cmd/split, hashsplit.split_to_blob_or_tree)
--------------------

The primary reason git can't handle huge files is that it runs them through xdelta, which generally means it tries to load the entire contents of a file into memory at once.  If it didn't do this, it would have to store the entire contents of every single revision of every single file, even if you only changed a few bytes of that file.  That would be a terribly inefficient use of disk space, and git is well known for its amazingly efficient repository format.

Unfortunately, xdelta works great for small files and gets amazingly slow and memory-hungry for large files.  For git's main purpose, ie. managing your source code, this isn't a problem.  But when backing up your filesystem, you're going to have at least a few large files, and so it's a non-starter.  bup has to do something totally different.

What bup does instead of xdelta is what we call "hashsplitting."  We wanted a general-purpose way to efficiently back up *any* large file that might change in small ways, without storing the entire file every time.  In fact, the original versions of bup could only store a single file at a time; surprisingly enough, this was enough to give us a large part of bup's functionality.  If you just take your entire filesystem and put it in a giant tarball each day, then send that tarball to bup, bup will be able to efficiently store only the changes to that tarball from one day to the next.  For small files, bup's compression won't be as good as xdelta's, but for anything over a few megabytes in size, bup's compression will actually *work*, which is a big advantage over xdelta.

How does hashsplitting work?  It's deceptively simple.  We read through the file one byte at a time, calculating a rolling checksum of the last 64 bytes.  (Why 64?  No reason.  Literally.  We picked it out of the air.  Probably some other number is better.  Feel free to join the mailing list and tell us which one and why.)

(The rolling checksum idea is actually stolen from rsync and xdelta, although we use it differently.  And they use some kind of variable window size based on a formula we don't totally understand.)

The original rolling checksum algorithm we used was called "stupidsum," because it was based on the only checksum Avery remembered how to calculate at the time.  He also remembered that it was the introductory checksum algorithm in a whole article about how to make good checksums that he read about 15 years ago, and it was thoroughly discredited in that article for being very stupid.  But, as so often happens, Avery couldn't remember any better algorithms from the article.  So what we got is stupidsum.

Since then, we have replaced the stupidsum algorithm with what we call "rollsum," based on code in librsync.  It's essentially the same as what rsync does, except we use a fixed window size.  (If you're a computer scientist and can demonstrate that some other rolling checksum would be faster and/or better and/or have fewer screwy edge cases, we need your help!  Avery's out of control!  Join our mailing list!  Please!  Save us!  ... oh boy, I sure hope he doesn't read this)

In any case, rollsum seems to do pretty well at its job.
You can find it in bupsplit.c.  Basically, it converts the last 64 bytes read into a 32-bit integer.  What we then do is take the lowest 13 bits of the rollsum, and if they're all 1's, we consider that to be the end of a chunk.  This happens on average once every 2^13 = 8192 bytes, so the average chunk size is 8192 bytes.  (Why 13 bits?  Well, we picked the number at random and... eugh.  You're getting the idea, right?  Join the mailing list and tell us why we're wrong.)

(Incidentally, even though the average chunk size is 8192 bytes, the actual probability distribution of block sizes ends up being non-uniform; if we remember our stats classes correctly, which we probably don't, it's probably an "exponential distribution."  The idea is that for each byte in the block, the probability that it's the last byte of the block is one in 8192.  Thus, the block sizes end up being skewed toward the smaller end.  That's not necessarily for the best, but maybe it is.  Computer science to the rescue?  You know the drill.)

Anyway, so we're dividing up those files into chunks based on the rolling checksum.  Then we store each chunk separately (indexed by its sha1sum) as a git blob.

Why do we split this way?  Well, because the results are actually really nice.  Let's imagine you have a big mysql database dump (produced by mysqldump) and it's basically 100 megs of SQL text.  Tomorrow's database dump adds 100 rows to the middle of the file somewhere, so it's 100.01 megs of text.

A naive block splitting algorithm - for example, just dividing the file into 8192-byte blocks - would be a disaster.  After the first bit of text has changed, every block after that would have a different boundary, so most of the blocks in the new backup would be different from the previous ones, and you'd have to store the same data all over again.

But with hashsplitting, no matter how much data you add, modify, or remove in the middle of the file, all the chunks *before* and *after* the affected chunk are absolutely the same.  All that matters to the hashsplitting algorithm is the 64-byte "separator" sequence, and a single change can only affect, at most, one separator sequence or the bytes between two separator sequences.  And because of rollsum, about one in 8192 possible 64-byte sequences is a separator sequence.  Like magic, the hashsplit chunking algorithm will chunk your file the same way every time, even without knowing how it had chunked it previously.

The next problem is less obvious: after you store your series of chunks as git blobs, how do you store their sequence?  Each blob has a 20-byte sha1 identifier, which means the simple list of blobs is going to be 20/8192 = 0.25% of the file length.  For a 200GB file, that's 488 megs of just sequence data.  As an overhead percentage, 0.25% basically doesn't matter.  488 megs sounds like a lot, but compared to the 200GB you have to store anyway, it's irrelevant.  What *is* relevant is that 488 megs is a lot of memory you have to use in order to keep track of the list.  Worse, if you back up an almost-identical file tomorrow, you'll have *another* 488 meg blob to keep track of, and it'll be almost but not quite the same as last time.  Hmm, big files, each one almost the same as the last... you know where this is going, right?

Actually no!  Ha!  We didn't split this list in the same way.  We could have, in fact, but it wouldn't have been very "git-like", since we'd like to store the list as a git 'tree' object in order to make sure git's refcounting and reachability analysis doesn't get confused.
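(A quick aside before we get to that: if you'd like to see the chunking rule from the last few paragraphs in code form, here's a deliberately tiny Python sketch.  It is *not* bup's real implementation - that's the rollsum code in bupsplit.c, updated incrementally in C for speed - and it lazily recomputes an ordinary CRC over the trailing 64-byte window at every position instead of rolling it, but the boundary rule is the one described above: a chunk ends wherever the lowest 13 bits of the window checksum are all ones.)

    import zlib

    WINDOW = 64        # rolling checksum window, as described above
    CHUNK_BITS = 13    # a chunk ends where the low 13 bits are all ones
    CHUNK_MASK = (1 << CHUNK_BITS) - 1

    def toy_hashsplit(data):
        """Yield (offset, size) pairs for the chunks of 'data' (bytes)."""
        start = 0
        for i in range(len(data)):
            window = data[max(0, i + 1 - WINDOW):i + 1]
            csum = zlib.crc32(window)               # stand-in for rollsum
            if (csum & CHUNK_MASK) == CHUNK_MASK:   # lowest 13 bits all 1s
                yield start, (i + 1) - start        # ...so end the chunk here
                start = i + 1
        if start < len(data):
            yield start, len(data) - start          # whatever's left at the end

On typical data the chunks average out to roughly 8 KiB, and - the important part - inserting or deleting bytes in the middle of a file only disturbs the chunk or two around the change; the chunks before and after it come out byte-for-byte identical, so they deduplicate against the previous backup.  Each of those chunks is what gets stored as a git blob.  Anyway, back to the question of how to store that sequence of chunks in a git-friendly way.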
Never mind the fact that we want you to be able to 'git checkout' your data without any special tools. What we do instead is we extend the hashsplit algorithm a little further using what we call "fanout." Instead of checking just the last 13 bits of the checksum, we use additional checksum bits to produce additional splits. For example, let's say we use a 4-bit fanout. That means we'll break a series of chunks into its own tree object whenever the last 13+4 = 17 bits of the rolling checksum are 1. Naturally, whenever the lowest 17 bits are 1, the lowest 13 bits are *also* 1, so the boundary of a chunk group is always also the boundary of a particular chunk. And so on. Eventually you'll have too many chunk groups, but you can group them into supergroups by using another 4 bits, and continue from there. What you end up with is an actual tree of blobs - which git 'tree' objects are ideal to represent. And if you think about it, just like the original list of chunks, the tree itself is pretty stable across file modifications. Any one modification will only affect the chunks actually containing the modifications, thus only the groups containing those chunks, and so on up the tree. Essentially, the number of changed git objects is O(log n) where n is the number of chunks. Since log 200 GB, using a base of 16 or so, is not a very big number, this is pretty awesome. Remember, any git object we *don't* change in a new backup is one we can reuse from last time, so the deduplication effect is pretty awesome. Better still, the hashsplit-tree format is good for a) random instead of sequential access to data (which you can see in action with 'bup fuse'); and b) quickly showing the differences between huge files (which we haven't really implemented because we don't need it, but you can try 'git diff -M -C -C backup1 backup2 -- filename' for a good start). So now we've split out 200 GB file into about 24 million pieces. That brings us to git limitation number 2. Handling huge numbers of files (git.PackWriter) ------------------------------ git is designed for handling reasonably-sized repositories that change relatively infrequently. (You might think you change your source code "frequently" and that git handles much more frequent changes than, say, svn can handle. But that's not the same kind of "frequently" we're talking about. Imagine you're backing up all the files on your disk, and one of those files is a 100 GB database file with hundreds of daily users. Your disk changes so frequently you can't even back up all the revisions even if you were backing stuff up 24 hours a day. That's "frequently.") git's way of doing things works really nicely for the way software developers write software, but it doesn't really work so well for everything else. The #1 killer is the way it adds new objects to the repository: it creates one file per blob. Then you later run 'git gc' and combine those files into a single file (using highly efficient xdelta compression, and ignoring any files that are no longer relevant). 'git gc' is slow, but for source code repositories, the resulting super-efficient storage (and associated really fast access to the stored files) is worth it. For backups, it's not; you almost never access your backed-up data, so storage time is paramount, and retrieval time is mostly unimportant. To back up that 200 GB file with git and hashsplitting, you'd have to create 24 million little 8k files, then copy them into a 200 GB packfile, then delete the 24 million files again. 
That would take about 400 GB of disk space to run, require lots of random disk seeks, and require you to go through your data twice. So bup doesn't do that. It just writes packfiles directly. Luckily, these packfiles are still git-formatted, so git can happily access them once they're written. But that leads us to our next problem. Huge numbers of huge packfiles (midx.py, bloom.py, cmd/midx, cmd/bloom) ------------------------------ Git isn't actually designed to handle super-huge repositories. Most git repositories are small enough that it's reasonable to merge them all into a single packfile, which 'git gc' usually does eventually. The problematic part of large packfiles isn't the packfiles themselves - git is designed to expect the total size of all packs to be larger than available memory, and once it can handle that, it can handle virtually any amount of data about equally efficiently. The problem is the packfile indexes (.idx) files. In bup we call these idx (pronounced "idix") files instead of using the word "index," because the word index is already used for something totally different in git (and thus bup) and we'll become hopelessly confused otherwise. Anyway, each packfile (*.pack) in git has an associated idx (*.idx) that's a sorted list of git object hashes and file offsets. If you're looking for a particular object based on its sha1, you open the idx, binary search it to find the right hash, then take the associated file offset, seek to that offset in the packfile, and read the object contents. The performance of the binary search is about O(log n) with the number of hashes in the pack, with an optimized first step (you can read about it elsewhere) that somewhat improves it to O(log(n)-7). Unfortunately, this breaks down a bit when you have *lots* of packs. Say you have 24 million objects (containing around 200 GB of data) spread across 200 packfiles of 1GB each. To look for an object requires you search through about 122000 objects per pack; ceil(log2(122000)-7) = 10, so you'll have to search 10 times. About 7 of those searches will be confined to a single 4k memory page, so you'll probably have to page in about 3-4 pages per file, times 200 files, which makes 600-800 4k pages (2.4-3.6 megs)... every single time you want to look for an object. This brings us to another difference between git's and bup's normal use case. With git, there's a simple optimization possible here: when looking for an object, always search the packfiles in MRU (most recently used) order. Related objects are usually clusted together in a single pack, so you'll usually end up searching around 3 pages instead of 600, which is a tremendous improvement. (And since you'll quickly end up swapping in all the pages in a particular idx file this way, it isn't long before searching for a nearby object doesn't involve any swapping at all.) bup isn't so lucky. git users spend most of their time examining existing objects (looking at logs, generating diffs, checking out branches), which lends itself to the above optimization. bup, on the other hand, spends most of its time looking for *nonexistent* objects in the repository so that it can back them up. When you're looking for objects that aren't in the repository, there's no good way to optimize; you have to exhaustively check all the packs, one by one, to ensure that none of them contain the data you want. To improve performance of this sort of operation, bup introduces midx (pronounced "midix" and short for "multi-idx") files. 
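(To make that per-pack cost concrete, here's roughly what a single lookup looks like.  This is a simplified Python sketch of the idea, not bup's actual git.py code: it assumes you've already parsed an .idx into a sorted list of 20-byte sha1s plus a parallel list of pack offsets, and it ignores the fixed 256-entry lookup table that real idx files use to shortcut the first few steps of the search.)

    import bisect

    def idx_lookup(sha, sorted_shas, offsets):
        """Return the pack offset for 'sha', or None if this pack lacks it.

        sorted_shas is the sorted list of 20-byte sha1s from one .idx file;
        offsets holds the matching pack offsets, in the same order.
        """
        i = bisect.bisect_left(sorted_shas, sha)    # the O(log n) binary search
        if i < len(sorted_shas) and sorted_shas[i] == sha:
            return offsets[i]
        return None

    def find_object(sha, parsed_idxes):
        """Try every pack in turn.  This is the painful case for bup: an
        object that is *not* in the repository (the common case while
        making a backup) means searching every single idx."""
        for packnum, (sorted_shas, offsets) in enumerate(parsed_idxes):
            ofs = idx_lookup(sha, sorted_shas, offsets)
            if ofs is not None:
                return packnum, ofs     # found it: which pack, and where
        return None                     # nowhere: this object needs storing

Multiply the pages touched by that inner search across a couple hundred packs for every "is this object new?" question, and you can see the problem; that is exactly the cost midx files are meant to eliminate.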
As the name implies, they index multiple packs at a time. Imagine you had a midx file for your 200 packs. midx files are a lot like idx files; they have a lookup table at the beginning that narrows down the initial search, followed by a binary search. Then unlike idx files (which have a fixed-size 256-entry lookup table) midx tables have a variably-sized table that makes sure the entire binary search can be contained to a single page of the midx file. Basically, the lookup table tells you which page to load, and then you binary search inside that page. A typical search thus only requires the kernel to swap in two pages, which is better than results with even a single large idx file. And if you have lots of RAM, eventually the midx lookup table (at least) will end up cached in memory, so only a single page should be needed for each lookup. You generate midx files with 'bup midx'. The downside of midx files is that generating one takes a while, and you have to regenerate it every time you add a few packs. UPDATE: Brandon Low contributed an implementation of "bloom filters", which have even better characteristics than midx for certain uses. Look it up in Wikipedia. He also massively sped up both midx and bloom by rewriting the key parts in C. The nicest thing about bloom filters is we can update them incrementally every time we get a new idx, without regenerating from scratch. That makes the update phase much faster, and means we can also get away with generating midxes less often. midx files are a bup-specific optimization and git doesn't know what to do with them. However, since they're stored as separate files, they don't interfere with git's ability to read the repository. Detailed Metadata ----------------- So that's the basic structure of a bup repository, which is also a git repository. There's just one more thing we have to deal with: filesystem metadata. Git repositories are really only intended to store file contents with a small bit of extra information, like symlink targets and and executable bits, so we have to store the rest some other way. Bup stores more complete metadata in the VFS in a file named .bupm in each tree. This file contains one entry for each file in the tree object, sorted in the same order as the tree. The first .bupm entry is for the directory itself, i.e. ".", and its name is the empty string, "". Each .bupm entry contains a variable length sequence of records containing the metadata for the corresponding path. Each record records one type of metadata. Current types include a common record type (containing the normal stat information), a symlink target type, a hardlink target type, a POSIX1e ACL type, etc. See metadata.py for the complete list. The .bupm file is optional, and when it's missing, bup will behave as it did before the addition of metadata, and restore files using the tree information. The nice thing about this design is that you can walk through each file in a tree just by opening the tree and the .bupm contents, and iterating through both at the same time. Since the contents of any .bupm file should match the state of the filesystem when it was *indexed*, bup must record the detailed metadata in the index. To do this, bup records four values in the index, the atime, mtime, and ctime (as timespecs), and an integer offset into a secondary "metadata store" which has the same name as the index, but with ".meta" appended. This secondary store contains the encoded Metadata object corresponding to each path in the index. 
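(The writing side of that metadata store is easy to picture.  The sketch below is hypothetical - bup's real bookkeeping lives in index.py and metadata.py and the encoding details differ - but it shows the shape of the idea, including the duplicate-suppression trick described in the next paragraph: identical encoded records are written once and share an offset.)

    def add_meta(metafile, seen_offsets, encoded):
        """Append one encoded metadata record to the ".meta" store and
        return its offset, reusing the offset of an identical record if
        one has already been written.  (Hypothetical sketch; bup's real
        encoding and bookkeeping differ.)"""
        if encoded in seen_offsets:
            return seen_offsets[encoded]    # duplicate: point at the old copy
        ofs = metafile.tell()
        metafile.write(encoded)
        seen_offsets[encoded] = ofs
        return ofs

    # Each index entry then carries (atime, mtime, ctime, meta_ofs), where
    # meta_ofs is the value returned above.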
Currently, in order to decrease the storage required for the metadata store, bup only writes unique values there, reusing offsets when appropriate across the index. The effectiveness of this approach relies on the expectation that there will be many duplicate metadata records. Storing the full timestamps in the index is intended to make that more likely, because it makes it unnecessary to record those values in the secondary store. So bup clears them before encoding the Metadata objects destined for the index, and timestamp differences don't contribute to the uniqueness of the metadata. Bup supports recording and restoring hardlinks, and it does so by tracking sets of paths that correspond to the same dev/inode pair when indexing. This information is stored in an optional file with the same name as the index, but ending with ".hlink". If there are multiple index runs, and the hardlinks change, bup will notice this (within whatever subtree it is asked to reindex) and update the .hlink information accordingly. The current hardlink implementation will refuse to link to any file that resides outside the restore tree, and if the restore tree spans a different set of filesystems than the save tree, complete sets of hardlinks may not be restored. Filesystem Interaction ====================== Storing data is just half of the problem of making a backup; figuring out what to store is the other half. At the most basic level, piping the output of 'tar' into 'bup split' is an easy way to offload that decision; just let tar do all the hard stuff. And if you like tar files, that's a perfectly acceptable way to do it. But we can do better. Backing up with tarballs would totally be the way to go, except for two serious problems: 1. The result isn't easily "seekable." Tar files have no index, so if (as commonly happens) you only want to restore one file in a 200 GB backup, you'll have to read up to 200 GB before you can get to the beginning of that file. tar is short for "tape archive"; on a tape, there was no better way to do it anyway, so they didn't try. But on a disk, random file access is much, much better when you can figure out how. 2. tar doesn't remember which files it backed up last time, so it has to read through the entire file contents again in order to generate the tarball, large parts of which will then be skipped by bup since they've already been stored. This is much slower than necessary. (The second point isn't entirely true for all versions of tar. For example, GNU tar has an "incremental" mode that can somewhat mitigate this problem, if you're smart enough to know how to use it without hurting yourself. But you still have to decide which backups are "incremental" and which ones will be "full" and so on, so even when it works, it's more error-prone than bup.) bup divides the backup process into two major steps: a) indexing the filesystem, and b) saving file contents into the repository. Let's look at those steps in detail. Indexing the filesystem (cmd/drecurse, cmd/index, index.py) ----------------------- Splitting the filesystem indexing phase into its own program is nontraditional, but it gives us several advantages. The first advantage is trivial, but might be the most important: you can index files a lot faster than you can back them up. That means we can generate the index (.bup/bupindex) first, then have a nice, reliable, non-lying completion bar that tells you how much of your filesystem remains to be backed up. 
The alternative would be annoying failures like counting the number of *files* remaining (as rsync does), even though one of the files is a virtual machine image of 80 GB, and the 1000 other files are each under 10k. With bup, the percentage complete is the *real* percentage complete, which is very pleasant. Secondly, it makes it easier to debug and test; you can play with the index without actually backing up any files. Thirdly, you can replace the 'bup index' command with something else and not have to change anything about the 'bup save' command. The current 'bup index' implementation just blindly walks the whole filesystem looking for files that have changed since the last time it was indexed; this works fine, but something using inotify instead would be orders of magnitude faster. Windows and MacOS both have inotify-like services too, but they're totally different; if we want to support them, we can simply write new bup commands that do the job, and they'll never interfere with each other. And fourthly, git does it that way, and git is awesome, so who are we to argue? So let's look at how the index file works. First of all, note that the ".bup/bupindex" file is not the same as git's ".git/index" file. The latter isn't used in bup; as far as git is concerned, your bup repository is a "bare" git repository and doesn't have a working tree, and thus it doesn't have an index either. However, the bupindex file actually serves exactly the same purpose as git's index file, which is why we still call it "the index." We just had to redesign it for the usual bup-vs-git reasons, mostly that git just isn't designed to handle millions of files in a single repository. (The only way to find a file in git's index is to search it linearly; that's very fast in git-sized repositories, but very slow in bup-sized ones.) Let's not worry about the exact format of the bupindex file; it's still not optimal, and will probably change again. The most important things to know about bupindex are: - You can iterate through it much faster than you can iterate through the "real" filesystem (using something like the 'find' command). - If you delete it, you can get it back just by reindexing your filesystem (although that can be annoying to wait for); it's not critical to the repository itself. - You can iterate through only particular subtrees if you want. - There is no need to have more than one index for a particular filesystem, since it doesn't store anything about backups; it just stores file metadata. It's really just a cache (or 'index') of your filesystem's existing metadata. You could share the bupindex between repositories, or between multiple users on the same computer. If you back up your filesystem to multiple remote repositories to be extra safe, you can still use the same bupindex file across all of them, because it's the same filesystem every time. - Filenames in the bupindex are absolute paths, because that's the best way to ensure that you only need one bupindex file and that they're interchangeable. A note on file "dirtiness" -------------------------- The concept on which 'bup save' operates is simple enough; it reads through the index and backs up any file that is "dirty," that is, doesn't already exist in the repository. Determination of dirtiness is a little more complicated than it sounds. The most dirtiness-relevant relevant flag in the bupindex is IX_HASHVALID; if this flag is reset, the file *definitely* is dirty and needs to be backed up. 
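(Jumping ahead a little, the complete rule 'bup save' ends up applying looks roughly like the sketch below; the next few paragraphs explain why the second check is necessary.)

    def needs_backup(entry, repo):
        """Would 'bup save' have to re-store the file for this index entry?

        A hypothetical sketch of the rule discussed here, not bup's actual
        code; 'entry' carries the IX_HASHVALID flag and the recorded sha1,
        and repo.exists(sha) asks whether that object is already stored."""
        if not entry.hashvalid:      # 'bup index' saw a change: definitely dirty
            return True
        # A valid hash only helps if the object really is in *this*
        # repository -- remember, the index can be shared between repos.
        return not repo.exists(entry.sha)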
But a file may be dirty even if IX_HASHVALID is set, and that's the confusing part. The index stores a listing of files, their attributes, and their git object ids (sha1 hashes), if known. The "if known" is what IX_HASHVALID is about. When 'bup save' backs up a file, it sets the sha1 and sets IX_HASHVALID; when 'bup index' sees that a file has changed, it leaves the sha1 alone and resets IX_HASHVALID. Remember that the index can be shared between users, repositories, and backups. So IX_HASHVALID doesn't mean your repository *has* that sha1 in it; it only means that if you *do* have it, that you don't need to back up the file. Thus, 'bup save' needs to check every file in the index to make sure its hash exists, not just that it's valid. There's an optimization possible, however: if you know a particular tree's hash is valid and exists (say /usr), then you don't need to check the validity of all its children; because of the way git trees and blobs work, if your repository is valid and you have a tree object, then you have all the blobs it points to. You won't back up a tree object without backing up its blobs first, so you don't need to double check it next time. (If you really want to double check this, it belongs in a tool like 'bup fsck' or 'git fsck'.) So in short, 'bup save' on a "clean" index (all files are marked IX_HASHVALID) can be very fast; we just check our repository and see if the top level IX_HASHVALID sha1 exists. If it does, then we're done. Similarly, if not the entire index is valid, you can still avoid recursing into subtrees if those particular subtrees are IX_HASHVALID and their sha1s are in the repository. The net result is that, as long as you never lose your index, 'bup save' can always run very fast. Another interesting trick is that you can skip backing up files even if IX_HASHVALID *isn't* set, as long as you have that file's sha1 in the repository. What that means is you've chosen not to backup the latest version of that file; instead, your new backup set just contains the most-recently-known valid version of that file. This is a good trick if you want to do frequent backups of smallish files and infrequent backups of large ones (as in 'bup save --smaller'). Each of your backups will be "complete," in that they contain all the small files and the large ones, but intermediate ones will just contain out-of-date copies of the large files. A final game we can play with the bupindex involves restoring: when you restore a directory from a previous backup, you can update the bupindex right away. Then, if you want to restore a different backup on top, you can compare the files in the index against the ones in the backup set, and update only the ones that have changed. (Even more interesting things happen if people are using the files on the restored system and you haven't updated the index yet; the net result would be an automated merge of all non-conflicting files.) This would be a poor man's distributed filesystem. The only catch is that nobody has written this feature for 'bup restore' yet. Someday! How 'bup save' works (cmd/save) -------------------- This section is too boring and has been omitted. Once you understand the index, there's nothing special about bup save. Retrieving backups: the bup vfs layer (vfs.py, cmd/ls, cmd/ftp, cmd/fuse) ===================================== One of the neat things about bup's storage format, at least compared to most backup tools, is it's easy to read a particular file, or even part of a file. 
That means a read-only virtual filesystem is easy to generate and it'll have good performance characteristics. Because of git's commit structure, you could even use branching and merging to make a transactional read-write filesystem... but that's probably getting a little out of bup's scope. Who knows what the future might bring, though? Read-only filesystems are well within our reach today, however. The 'bup ls', 'bup ftp', and 'bup fuse' commands all use a "VFS" (virtual filesystem) layer to let you access your repositories. Feel free to explore the source code for these tools and vfs.py - they're pretty straightforward. Some things to note: - None of these use the bupindex for anything. - For user-friendliness, they present your refs/commits/trees as a single hierarchy (ie. a filesystem), which isn't really how git repositories are formatted. So don't get confused! We hope you'll enjoy bup. Looking forward to your patches! -- apenwarr and the rest of the bup team Local Variables: mode: text End: bup-0.29/Documentation/000077500000000000000000000000001303127641400150445ustar00rootroot00000000000000bup-0.29/Documentation/.gitignore000066400000000000000000000000321303127641400170270ustar00rootroot00000000000000*.[0-9] *.html /substvars bup-0.29/Documentation/bup-bloom.md000066400000000000000000000027351303127641400172710ustar00rootroot00000000000000% bup-bloom(1) Bup %BUP_VERSION% % Brandon Low % %BUP_DATE% # NAME bup-bloom - generates, regenerates, updates bloom filters # SYNOPSIS bup bloom [-d dir] [-o outfile] [-k hashes] [-c idxfile] [-f] [\--ruin] # DESCRIPTION `bup bloom` builds a bloom filter file for a bup repository. If one already exists, it checks the filter and updates or regenerates it as needed. # OPTIONS \--ruin : destroy bloom filters by setting the whole bitmask to zeros. you really want to know what you are doing if run this and you want to delete the resulting bloom when you are done with it. -f, \--force : don't update the existing bloom file; generate a new one from scratch. -d, \--dir=*directory* : the directory, containing `.idx` files, to process. Defaults to $BUP_DIR/objects/pack -o, \--outfile=*outfile* : the file to write the bloom filter to. defaults to $dir/bup.bloom -k, \--hashes=*hashes* : number of hash functions to use only 4 and 5 are valid. defaults to 5 for repositories < 2 TiB, or 4 otherwise. See comments in git.py for more on this value. -c, \--check=*idxfile* : checks the bloom file (counterintuitively outfile) against the specified `.idx` file, first checks that the bloom filter is claiming to contain the `.idx`, then checks that it does actually contain all of the objects in the `.idx`. Does not write anything and ignores the `-k` option. # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-cat-file.md000066400000000000000000000025551303127641400176450ustar00rootroot00000000000000% bup-cat-file(1) Bup %BUP_VERSION% % Rob Browning % %BUP_DATE% # NAME bup-cat-file - extract archive content (low-level) # SYNOPSIS bup cat-file [--meta|--bupm] <*path*> # DESCRIPTION `bup cat-file` extracts content associated with *path* from the archive and dumps it to standard output. If nothing special is requested, the actual data contained by *path* (which must be a regular file) will be dumped. # OPTIONS \--meta : retrieve the metadata entry associated with *path*. Note that currently this does not return the raw bytes for the entry recorded in the relevant .bupm in the archive, but rather a decoded and then re-encoded version. 
When that matters, it should be possible (though awkward) to use `--bupm` on the parent directory and then find the relevant entry in the output. \--bupm : retrieve the .bupm file associated with *path*, which must be a directory. # EXAMPLES # Retrieve the content of somefile. $ bup cat-file /foo/latest/somefile > somefile-content # Examine the metadata associated with something. $ bup cat-file --meta /foo/latest/something | bup meta -tvvf - # Examine the metadata for somedir, including the items it contains. $ bup cat-file --bupm /foo/latest/somedir | bup meta -tvvf - # SEE ALSO `bup-join`(1), `bup-meta`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-daemon.md000066400000000000000000000007541303127641400174230ustar00rootroot00000000000000% bup-daemon(1) Bup %BUP_VERSION% % Brandon Low % %BUP_DATE% # NAME bup-daemon - listens for connections and runs `bup server` # SYNOPSIS bup daemon [-l address] [-p port] # DESCRIPTION `bup daemon` is a simple bup server which listens on a socket and forks connections to `bup mux server` children. # OPTIONS -l, \--listen=*address* : the address or hostname to listen on -p, \--port=*port* : the port to listen on # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-damage.md000066400000000000000000000057111303127641400173740ustar00rootroot00000000000000% bup-damage(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-damage - randomly destroy blocks of a file # SYNOPSIS bup damage [-n count] [-s maxsize] [\--percent pct] [-S seed] [\--equal] \ # DESCRIPTION Use `bup damage` to deliberately destroy blocks in a `.pack` or `.idx` file (from `.bup/objects/pack`) to test the recovery features of `bup-fsck`(1) or other programs. *THIS PROGRAM IS EXTREMELY DANGEROUS AND WILL DESTROY YOUR DATA* `bup damage` is primarily useful for automated or manual tests of data recovery tools, to reassure yourself that the tools actually work. # OPTIONS -n, \--num=*numblocks* : the number of separate blocks to damage in each file (default 10). Note that it's possible for more than one damaged segment to fall in the same `bup-fsck`(1) recovery block, so you might not damage as many recovery blocks as you expect. If this is a problem, use `--equal`. -s, \--size=*maxblocksize* : the maximum size, in bytes, of each damaged block (default 1 unless `--percent` is specified). Note that because of the way `bup-fsck`(1) works, a multi-byte block could fall on the boundary between two recovery blocks, and thus damaging two separate recovery blocks. In small files, it's also possible for a damaged block to be larger than a recovery block. If these issues might be a problem, you should use the default damage size of one byte. \--percent=*maxblockpercent* : the maximum size, in percent of the original file, of each damaged block. If both `--size` and `--percent` are given, the maximum block size is the minimum of the two restrictions. You can use this to ensure that a given block will never damage more than one or two `git-fsck`(1) recovery blocks. -S, \--seed=*randomseed* : seed the random number generator with the given value. If you use this option, your tests will be repeatable, since the damaged block offsets, sizes, and contents will be the same every time. By default, the random numbers are different every time (so you can run tests in a loop and repeatedly test with different damage each time). \--equal : instead of choosing random offsets for each damaged block, space the blocks equally throughout the file, starting at offset 0. 
If you also choose a correct maximum block size, this can guarantee that any given damage block never damages more than one `git-fsck`(1) recovery block. (This is also guaranteed if you use `-s 1`.) # EXAMPLES # make a backup in case things go horribly wrong cp -pPR ~/.bup/objects/pack ~/bup-packs.bak # generate recovery blocks for all packs bup fsck -g # deliberately damage the packs bup damage -n 10 -s 1 -S 0 ~/.bup/objects/pack/*.{pack,idx} # recover from the damage bup fsck -r # SEE ALSO `bup-fsck`(1), `par2`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-drecurse.md000066400000000000000000000037641303127641400200000ustar00rootroot00000000000000% bup-drecurse(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-drecurse - recursively list files in your filesystem # SYNOPSIS bup drecurse [-x] [-q] [\--exclude *path*] \ [\--exclude-from *filename*] [\--exclude-rx *pattern*] \ [\--exclude-rx-from *filename*] [\--profile] \ # DESCRIPTION `bup drecurse` traverses files in the filesystem in a way similar to `find`(1). In most cases, you should use `find`(1) instead. This program is useful mainly for testing the file traversal algorithm used in `bup-index`(1). Note that filenames are returned in reverse alphabetical order, as in `bup-index`(1). This is important because you can't generate the hash of a parent directory until you have generated the hashes of all its children. When listing files in reverse order, the parent directory will come after its children, making this easy. # OPTIONS -x, \--xdev, \--one-file-system : don't cross filesystem boundaries -- though as with tar and rsync, the mount points themselves will still be reported. -q, \--quiet : don't print filenames as they are encountered. Useful when testing performance of the traversal algorithms. \--exclude=*path* : exclude *path* from the backup (may be repeated). \--exclude-from=*filename* : read --exclude paths from *filename*, one path per-line (may be repeated). Ignore completely empty lines. \--exclude-rx=*pattern* : exclude any path matching *pattern*. See `bup-index`(1) for details, but note that unlike index, drecurse will produce relative paths if the drecurse target is a relative path. (may be repeated). \--exclude-rx-from=*filename* : read --exclude-rx patterns from *filename*, one pattern per-line (may be repeated). Ignore completely empty lines. \--profile : print profiling information upon completion. Useful when testing performance of the traversal algorithms. # EXAMPLES bup drecurse -x / # SEE ALSO `bup-index`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-fsck.md000066400000000000000000000070131303127641400171010ustar00rootroot00000000000000% bup-fsck(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-fsck - verify or repair a bup repository # SYNOPSIS bup fsck [-r] [-g] [-v] [\--quick] [-j *jobs*] [\--par2-ok] [\--disable-par2] [filenames...] # DESCRIPTION `bup fsck` is a tool for validating bup repositories in the same way that `git fsck` validates git repositories. It can also generate and/or use "recovery blocks" using the `par2`(1) tool (if you have it installed). This allows you to recover from damaged blocks covering up to 5% of your `.pack` files. In a normal backup system, damaged blocks are less important, because there tends to be enough data duplicated between backup sets that a single damaged backup set is non-critical. 
In a deduplicating backup system like bup, however, no block is ever stored more than once, even if it is used in every single backup. If that block were to be unrecoverable, *all* your backup sets would be damaged at once. Thus, it's important to be able to verify the integrity of your backups and recover from disk errors if they occur. *WARNING*: bup fsck's recovery features are not available unless you have the free `par2`(1) package installed on your bup server. *WARNING*: bup fsck obviously cannot recover from a complete disk failure. If your backups are important, you need to carefully consider redundancy (such as using RAID for multi-disk redundancy, or making off-site backups for site redundancy). # OPTIONS -r, \--repair : attempt to repair any damaged packs using existing recovery blocks. (Requires `par2`(1).) -g, \--generate : generate recovery blocks for any packs that don't already have them. (Requires `par2`(1).) -v, \--verbose : increase verbosity (can be used more than once). \--quick : don't run a full `git verify-pack` on each pack file; instead just check the final checksum. This can cause a significant speedup with no obvious decrease in reliability. However, you may want to avoid this option if you're paranoid. Has no effect on packs that already have recovery information. -j, \--jobs=*numjobs* : maximum number of pack verifications to run at a time. The optimal value for this option depends how fast your CPU can verify packs vs. your disk throughput. If you run too many jobs at once, your disk will get saturated by seeking back and forth between files and performance will actually decrease, even if *numjobs* is less than the number of CPU cores on your system. You can experiment with this option to find the optimal value. \--par2-ok : immediately return 0 if `par2`(1) is installed and working, or 1 otherwise. Do not actually check anything. \--disable-par2 : pretend that `par2`(1) is not installed, and ignore all recovery blocks. # EXAMPLES # generate recovery blocks for all packs that don't # have them bup fsck -g # generate recovery blocks for a particular pack bup fsck -g ~/.bup/objects/pack/153a1420cb1c8*.pack # check all packs for correctness (can be very slow!) bup fsck # check all packs for correctness and recover any # damaged ones bup fsck -r # check a particular pack for correctness and recover # it if damaged bup fsck -r ~/.bup/objects/pack/153a1420cb1c8*.pack # check if recovery blocks are available on this system if bup fsck --par2-ok; then echo "par2 is ok" fi # SEE ALSO `bup-damage`(1), `fsck`(1), `git-fsck`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-ftp.md000066400000000000000000000037111303127641400167450ustar00rootroot00000000000000% bup-ftp(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-ftp - ftp-like client for navigating bup repositories # SYNOPSIS bup ftp # DESCRIPTION `bup ftp` is a command-line tool for navigating bup repositories. It has commands similar to the Unix `ftp`(1) command. The file hierarchy is the same as that shown by `bup-fuse`(1) and `bup-ls`(1). Note: if your system has the python-readline library installed, you can use the \ key to complete filenames while navigating your backup data. This will save you a lot of typing. # COMMANDS The following commands are available inside `bup ftp`: ls [-s] [-a] [*path*] : print the contents of a directory. If no path argument is given, the current directory's contents are listed. 
If -a is given, also include hidden files (files which start with a `.` character). If -s is given, each file is displayed with its hash from the bup archive to its left. cd *dirname* : change to a different working directory pwd : print the path of the current working directory cat *filenames...* : print the contents of one or more files to stdout get *filename* *localname* : download the contents of *filename* and save it to disk as *localname*. If *localname* is omitted, uses *filename* as the local name. mget *filenames...* : download the contents of the given *filenames* and stores them to disk under the same names. The filenames may contain Unix filename globs (`*`, `?`, etc.) help : print a list of available commands quit : exit the `bup ftp` client # EXAMPLES $ bup ftp bup> ls mybackup/ yourbackup/ bup> cd mybackup/ bup> ls 2010-02-05-185507@ 2010-02-05-185508@ latest@ bup> cd latest/ bup> ls (...etc...) bup> get myfile Saving 'myfile' bup> quit # SEE ALSO `bup-fuse`(1), `bup-ls`(1), `bup-save`(1), `bup-restore`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-fuse.md000066400000000000000000000032631303127641400171200ustar00rootroot00000000000000% bup-fuse(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-fuse - mount a bup repository as a filesystem # SYNOPSIS bup fuse [-d] [-f] [-o] \ # DESCRIPTION `bup fuse` opens a bup repository and exports it as a `fuse`(7) userspace filesystem. This feature is only available on systems (such as Linux) which support FUSE. **WARNING**: bup fuse is still experimental and does not enforce any file permissions! All files will be readable by all users. When you're done accessing the mounted fuse filesystem, you should unmount it with `umount`(8). # OPTIONS -d, \--debug : run in the foreground and print FUSE debug information for each request. -f, \--foreground : run in the foreground and exit only when the filesystem is unmounted. -o, \--allow-other : permit other users to access the filesystem. Necessary for exporting the filesystem via Samba, for example. \--meta : report some of the original metadata (when available) for the mounted paths (currently the uid, gid, mode, and timestamps). Without this, only generic values will be presented. This option is not yet enabled by default because it may negatively affect performance, and note that any timestamps before 1970-01-01 UTC (i.e. before the Unix epoch) will be presented as 1970-01-01 UTC. -v, \--verbose : increase verbosity (can be used more than once). # EXAMPLES rm -rf /tmp/buptest mkdir /tmp/buptest sudo bup fuse -d /tmp/buptest ls /tmp/buptest/*/latest ... umount /tmp/buptest # SEE ALSO `fuse`(7), `fusermount`(1), `bup-ls`(1), `bup-ftp`(1), `bup-restore`(1), `bup-web`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-gc.md000066400000000000000000000036661303127641400165560ustar00rootroot00000000000000% bup-gc(1) Bup %BUP_VERSION% % Rob Browning % %BUP_DATE% # NAME bup-gc - remove unreferenced, unneeded data (CAUTION: EXPERIMENTAL) # SYNOPSIS bup gc [-#|--verbose] <*branch*|*save*...> # DESCRIPTION `bup gc` removes (permanently deletes) unreachable data from the repository, data that isn't referred to directly or indirectly by the current set of branches (backup sets) and tags. But bear in mind that given deduplication, deleting a save and running the garbage collector might or might not actually delete anything (or reclaim any space). With the current, proababilistic implementation, some fraction of the unreachable data may be retained. 
In exchange, the garbage collection should require much less RAM than might be needed by some more precise approaches.

Typically, the garbage collector would be invoked after some set of invocations of `bup rm`.

WARNING: This is one of the few bup commands that modifies your archive in intentionally destructive ways.  Though if an attempt to `join` or `restore` the data you still care about after a `gc` succeeds, that's a fairly encouraging sign that the commands worked correctly.  (The `t/compare-trees` command in the source tree can be used to help test before/after results.)

# OPTIONS

\--threshold=N
:   only rewrite a packfile if it's over N percent garbage; otherwise
    leave it alone.  The default threshold is 10%.

-v, \--verbose
:   increase verbosity (can be used more than once).  With one -v, bup
    prints every directory name as it gets backed up.  With two -v, it
    also prints every filename.

-*#*, \--compress=*#*
:   set the compression level to # (a value from 0-9, where 9 is the
    highest and 0 is no compression).  The default is 1 (fast, loose
    compression).

# EXAMPLES

    # Remove all saves of "home" and most of the otherwise unreferenced data.
    $ bup rm home
    $ bup gc

# SEE ALSO

`bup-rm`(1) and `bup-fsck`(1)

# BUP

Part of the `bup`(1) suite.

bup-0.29/Documentation/bup-help.md:

% bup-help(1) Bup %BUP_VERSION%
% Avery Pennarun
% %BUP_DATE%

# NAME

bup-help - open the documentation for a given bup command

# SYNOPSIS

bup help \<command\>

# DESCRIPTION

`bup help <command>` opens the documentation for the given command.  This is
currently equivalent to typing `man bup-<command>`.

# EXAMPLES

    $ bup help help
    (Imagine that this man page was pasted below, recursively.  Since
    that would cause an endless loop we include this silly remark
    instead.  Chicken.)

# BUP

Part of the `bup`(1) suite.

bup-0.29/Documentation/bup-import-duplicity.md:

% bup-import-duplicity(1) Bup %BUP_VERSION%
% Zoran Zaric, Rob Browning
% %BUP_DATE%

# NAME

bup-import-duplicity - import duplicity backups

# WARNING

bup-import-duplicity is **EXPERIMENTAL** (proceed with caution)

# SYNOPSIS

bup import-duplicity [-n] \<source-url\> \<save-name\>

# DESCRIPTION

`bup import-duplicity` imports all of the duplicity backups at `source-url`
into `bup` via `bup save -n save-name`.  The bup saves will have the same
timestamps (via `bup save --date`) as the original backups.

Because this command operates by restoring each duplicity backup to a
temporary directory, the extent to which the metadata is preserved will
depend on the characteristics of the underlying filesystem, whether or not
you run `import-duplicity` as root (or under `fakeroot`(1)), etc.

Note that this command will use [`mkdtemp`][mkdtemp] to create temporary
directories, which means that it should respect any `TMPDIR`, `TEMP`, or
`TMP` environment variable settings.  Make sure that the relevant filesystem
has enough space for the largest duplicity backup being imported.

Since all invocations of duplicity use a temporary `--archive-dir`,
`import-duplicity` should not affect ~/.cache/duplicity.

# OPTIONS

-n, \--dry-run
:   don't do anything; just print out what would be done

# EXAMPLES

    $ bup import-duplicity file:///duplicity/src/ legacy-duplicity

# BUP

Part of the `bup`(1) suite.
[mkdtemp]: https://docs.python.org/2/library/tempfile.html#tempfile.mkdtemp bup-0.29/Documentation/bup-import-rdiff-backup.md000066400000000000000000000011561303127641400220220ustar00rootroot00000000000000% bup-import-rdiff-backup(1) Bup %BUP_VERSION% % Zoran Zaric % %BUP_DATE% # NAME bup-import-rdiff-backup - import a rdiff-backup archive # SYNOPSIS bup import-rdiff-backup [-n] # DESCRIPTION `bup import-rdiff-backup` imports a rdiff-backup archive. The timestamps for the backups are preserved and the path to the rdiff-backup archive is stripped from the paths. # OPTIONS -n,--dry-run : don't do anything just print out what would be done # EXAMPLES $ bup import-rdiff-backup /.snapshots legacy-rdiff-backup # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-import-rsnapshot.md000066400000000000000000000013421303127641400215030ustar00rootroot00000000000000% bup-import-rsnapshot(1) Bup %BUP_VERSION% % Zoran Zaric % %BUP_DATE% # NAME bup-import-rsnapshot - import a rsnapshot archive # SYNOPSIS bup import-rsnapshot [-n] \ [\] # SYNOPSIS `bup import-rsnapshot` imports an rsnapshot archive. The timestamps for the backups are preserved and the path to the rsnapshot archive is stripped from the paths. `bup import-rsnapshot` either imports the whole archive or imports all backups only for a given backuptarget. # OPTIONS -n, \--dry-run : don't do anything just print out what would be done # EXAMPLES $ bup import-rsnapshot /.snapshots $ bup import-rsnapshot /.snapshots host1 # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-index.md000066400000000000000000000166161303127641400172730ustar00rootroot00000000000000% bup-index(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-index - print and/or update the bup filesystem index # SYNOPSIS bup index \<-p|-m|-s|-u|\--clear|\--check\> [-H] [-l] [-x] [\--fake-valid] [\--no-check-device] [\--fake-invalid] [-f *indexfile*] [\--exclude *path*] [\--exclude-from *filename*] [\--exclude-rx *pattern*] [\--exclude-rx-from *filename*] [-v] \ # DESCRIPTION `bup index` manipulates the filesystem index, which is a cache of absolute paths and their metadata (atttributes, SHA-1 hashes, etc.). The bup index is similar in function to the `git`(1) index, and the default index can be found in `$BUP_DIR/bupindex`. Creating a backup in bup consists of two steps: updating the index with `bup index`, then actually backing up the files (or a subset of the files) with `bup save`. The separation exists for these reasons: 1. There is more than one way to generate a list of files that need to be backed up. For example, you might want to use `inotify`(7) or `dnotify`(7). 2. Even if you back up files to multiple destinations (for added redundancy), the file names, attributes, and hashes will be the same each time. Thus, you can save the trouble of repeatedly re-generating the list of files for each backup set. 3. You may want to use the data tracked by bup index for other purposes (such as speeding up other programs that need the same information). # NOTES At the moment, bup will ignore Linux attributes (cf. chattr(1) and lsattr(1)) on some systems (any big-endian systems where sizeof(long) < sizeof(int)). This is because the Linux kernel and FUSE currently disagree over the type of the attr system call arguments, and so on big-endian systems there's no way to get the results without the risk of stack corruption (http://lwn.net/Articles/575846/). 
In these situations, bup will print a warning the first time Linux attrs are relevant during any index/save/restore operation. bup makes accommodations for the expected "worst-case" filesystem timestamp resolution -- currently one second; examples include VFAT, ext2, ext3, small ext4, etc. Since bup cannot know the filesystem timestamp resolution, and could be traversing multiple filesystems during any given run, it always assumes that the resolution may be no better than one second. As a practical matter, this means that index updates are a bit imprecise, and so `bup save` may occasionally record filesystem changes that you didn't expect. That's because, during an index update, if bup encounters a path whose actual timestamps are more recent than one second before the update started, bup will set the index timestamps for that path (mtime and ctime) to exactly one second before the run, -- effectively capping those values. This ensures that no subsequent changes to those paths can result in timestamps that are identical to those in the index. If that were possible, bup could overlook the modifications. You can see the effect of this behavior in this example (assume that less than one second elapses between the initial file creation and first index run): $ touch src/1 src/2 # A "sleep 1" here would avoid the unexpected save. $ bup index src $ bup save -n src src # Saves 1 and 2. $ date > src/1 $ bup index src $ date > src/2 # Not indexed. $ bup save -n src src # But src/2 is saved anyway. Strictly speaking, bup should not notice the change to src/2, but it does, due to the accommodations described above. # MODES -u, \--update : recursively update the index for the given paths and their descendants. One or more paths must be specified, and if a path ends with a symbolic link, the link itself will be indexed, not the target. If no mode option is given, `--update` is the default, and paths may be excluded by the `--exclude`, `--exclude-rx`, and `--one-file-system` options. -p, \--print : print the contents of the index. If paths are given, shows the given entries and their descendants. If no paths are given, shows the entries starting at the current working directory (.). -m, \--modified : prints only files which are marked as modified (ie. changed since the most recent backup) in the index. Implies `-p`. -s, \--status : prepend a status code (A, M, D, or space) before each path. Implies `-p`. The codes mean, respectively, that a file is marked in the index as added, modified, deleted, or unchanged since the last backup. \--check : carefully check index file integrity before and after updating. Mostly useful for automated tests. \--clear : clear the default index. # OPTIONS -H, \--hash : for each file printed, prepend the most recently recorded hash code. The hash code is normally generated by `bup save`. For objects which have not yet been backed up, the hash code will be 0000000000000000000000000000000000000000. Note that the hash code is printed even if the file is known to be modified or deleted in the index (ie. the file on the filesystem no longer matches the recorded hash). If this is a problem for you, use `--status`. -l, \--long : print more information about each file, in a similar format to the `-l` option to `ls`(1). -x, \--xdev, \--one-file-system : don't cross filesystem boundaries when traversing the filesystem -- though as with tar and rsync, the mount points themselves will still be indexed. Only applicable if you're using `-u`. 
\--fake-valid : mark specified paths as up-to-date even if they aren't. This can be useful for testing, or to avoid unnecessarily backing up files that you know are boring. \--fake-invalid : mark specified paths as not up-to-date, forcing the next "bup save" run to re-check their contents. -f, \--indexfile=*indexfile* : use a different index filename instead of `$BUP_DIR/bupindex`. \--exclude=*path* : exclude *path* from the backup (may be repeated). \--exclude-from=*filename* : read --exclude paths from *filename*, one path per-line (may be repeated). Ignore completely empty lines. \--exclude-rx=*pattern* : exclude any path matching *pattern*, which must be a Python regular expression (http://docs.python.org/library/re.html). The pattern will be compared against the full path, without anchoring, so "x/y" will match "ox/yard" or "box/yards". To exclude the contents of /tmp, but not the directory itself, use "^/tmp/.". (may be repeated) Examples: * '/foo$' - exclude any file named foo * '/foo/$' - exclude any directory named foo * '/foo/.' - exclude the content of any directory named foo * '^/tmp/.' - exclude root-level /tmp's content, but not /tmp itself \--exclude-rx-from=*filename* : read --exclude-rx patterns from *filename*, one pattern per-line (may be repeated). Ignore completely empty lines. \--no-check-device : don't mark an entry invalid if the device number (stat(2) st_dev) changes. This can be useful when indexing remote, automounted, or snapshot filesystems (LVM, Btrfs, etc.), where the device number isn't fixed. -v, \--verbose : increase log output during update (can be used more than once). With one `-v`, print each directory as it is updated; with two `-v`, print each file too. # EXAMPLES bup index -vux /etc /var /usr # SEE ALSO `bup-save`(1), `bup-drecurse`(1), `bup-on`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-init.md000066400000000000000000000015551303127641400171230ustar00rootroot00000000000000% bup-init(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-init - initialize a bup repository # SYNOPSIS [BUP_DIR=*localpath*] bup init [-r *host*:*path*] # DESCRIPTION `bup init` initializes your local bup repository. By default, BUP_DIR is `~/.bup`. # OPTIONS -r, \--remote=*host*:*path* : Initialize not only the local repository, but also the remote repository given by the *host* and *path*. This is not necessary if you intend to back up to the default location on the server (ie. a blank *path*). The connection to the remote server is made with SSH. If you'd like to specify which port, user or private key to use for the SSH connection, we recommend you use the `~/.ssh/config` file. # EXAMPLES bup init # SEE ALSO `bup-fsck`(1), `ssh_config`(5) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-join.md000066400000000000000000000031531303127641400171130ustar00rootroot00000000000000% bup-join(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-join - concatenate files from a bup repository # SYNOPSIS bup join [-r *host*:*path*] [refs or hashes...] # DESCRIPTION `bup join` is roughly the opposite operation to `bup-split`(1). You can use it to retrieve the contents of a file from a local or remote bup repository. The supplied list of refs or hashes can be in any format accepted by `git`(1), including branch names, commit ids, tree ids, or blob ids. If no refs or hashes are given on the command line, `bup join` reads them from stdin instead. 
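The stdin form makes it easy to script retrievals; here is a minimal sketch (the file names are hypothetical):

    # Assumption: ids.txt contains one branch name, commit id, tree id,
    # or blob id per line; the contents are concatenated to stdout.
    $ bup join <ids.txt >objects.out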
# OPTIONS -r, \--remote=*host*:*path* : Retrieves objects from the given remote repository instead of the local one. *path* may be blank, in which case the default remote repository is used. The connection to the remote server is made with SSH. If you'd like to specify which port, user or private key to use for the SSH connection, we recommend you use the `~/.ssh/config` file. Even though the data source is remote, a local bup repository is still required. # EXAMPLES # split and then rejoin a file using its tree id TREE=$(tar -cvf - /etc | bup split -t) bup join $TREE | tar -tf - # make two backups, then get the second-most-recent. # mybackup~1 is git(1) notation for the second most # recent commit on the branch named mybackup. tar -cvf - /etc | bup split -n mybackup tar -cvf - /etc | bup split -n mybackup bup join mybackup~1 | tar -tf - # SEE ALSO `bup-split`(1), `bup-save`(1), `bup-cat-file`, `ssh_config`(5) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-ls.md000066400000000000000000000042271303127641400165750ustar00rootroot00000000000000% bup-ls(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-ls - list the contents of a bup repository # SYNOPSIS bup ls [OPTION...] \ # DESCRIPTION `bup ls` lists files and directories in your bup repository using the same directory hierarchy as they would have with `bup-fuse`(1). The top level directory contains the branch (corresponding to the `-n` option in `bup save`), the next level is the date of the backup, and subsequent levels correspond to files in the backup. When `bup ls` is asked to output on a tty, and `-l` is not specified, it formats the output in columns so it can list as much as possible in as few lines as possible. However, when `-l` is specified or bup is asked to output to something other than a tty (say you pipe the output to another command, or you redirect it to a file), it will print one file name per line. This makes the listing easier to parse with external tools. Note that `bup ls` doesn't show hidden files by default and one needs to use the `-a` option to show them. Files are hidden when their name begins with a dot. For example, on the topmost level, the special directories named `.commit` and `.tag` are hidden directories. Once you have identified the file you want using `bup ls`, you can view its contents using `bup join` or `git show`. # OPTIONS -s, \--hash : show hash for each file/directory. -a, \--all : show hidden files. -A, \--almost-all : show hidden files, except "." and "..". -d, \--directory : show information about directories themselves, rather than their contents, and don't follow symlinks. -l : provide a detailed, long listing for each item. -F, \--classify : append type indicator: dir/, symlink@, fifo|, socket=, and executable*. \--file-type : append type indicator: dir/, symlink@, fifo|, socket=. \--human-readable : print human readable file sizes (i.e. 3.9K, 4.7M). \--numeric-ids : display numeric IDs (user, group, etc.) rather than names. # EXAMPLES bup ls /myserver/latest/etc/profile bup ls -a / # SEE ALSO `bup-join`(1), `bup-fuse`(1), `bup-ftp`(1), `bup-save`(1), `git-show`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-margin.md000066400000000000000000000043321303127641400174310ustar00rootroot00000000000000% bup-margin(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-margin - figure out your deduplication safety margin # SYNOPSIS bup margin [options...] 
# DESCRIPTION `bup margin` iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two entries. This number, `n`, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids. For example, one system that was tested had a collection of 11 million objects (70 GB), and `bup margin` returned 45. That means a 46-bit hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by its first 46 bits. The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits, that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits with far fewer objects. If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running `bup margin` occasionally to see if you're getting dangerously close to 160 bits. # OPTIONS \--predict : Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer from the guess. This is potentially useful for tuning an interpolation search algorithm. \--ignore-midx : don't use `.midx` files, use only `.idx` files. This is only really useful when used with `--predict`. # EXAMPLES $ bup margin Reading indexes: 100.00% (1612581/1612581), done. 40 40 matching prefix bits 1.94 bits per doubling 120 bits (61.86 doublings) remaining 4.19338e+18 times larger is possible Everyone on earth could have 625878182 data sets like yours, all in one repository, and we would expect 1 object collision. $ bup margin --predict PackIdxList: using 1 index. Reading indexes: 100.00% (1612581/1612581), done. 915 of 1612581 (0.057%) # SEE ALSO `bup-midx`(1), `bup-save`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-memtest.md000066400000000000000000000113421303127641400176310ustar00rootroot00000000000000% bup-memtest(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-memtest - test bup memory usage statistics # SYNOPSIS bup memtest [options...] # DESCRIPTION `bup memtest` opens the list of pack indexes in your bup repository, then searches the list for a series of nonexistent objects, printing memory usage statistics after each cycle. Because of the way Unix systems work, the output will usually show a large (and unchanging) value in the VmSize column, because mapping the index files in the first place takes a certain amount of virtual address space. However, this virtual memory usage is entirely virtual; it doesn't take any of your RAM. Over time, bup uses *parts* of the indexes, which need to be loaded from disk, and this is what causes an increase in the VmRSS column. # OPTIONS -n, \--number=*number* : set the number of objects to search for during each cycle (ie. before printing a line of output) -c, \--cycles=*cycles* : set the number of cycles (ie. the number of lines of output after the first). The first line of output is always 0 (ie. the baseline before searching for any objects). \--ignore-midx : ignore any `.midx` files created by `bup midx`. This allows you to compare memory performance with and without using midx. \--existing : search for existing objects instead of searching for random nonexistent ones. This can greatly affect memory usage and performance. 
Note that most of the time, `bup save` spends most of its time searching for nonexistent objects, since existing ones are probably in unmodified files that we won't be trying to back up anyway. So the default behaviour reflects real bup performance more accurately. But you might want this option anyway just to make sure you haven't made searching for existing objects much worse than before. # EXAMPLES $ bup memtest -n300 -c5 PackIdxList: using 1 index. VmSize VmRSS VmData VmStk 0 20824 kB 4528 kB 1980 kB 84 kB 300 20828 kB 5828 kB 1984 kB 84 kB 600 20828 kB 6844 kB 1984 kB 84 kB 900 20828 kB 7836 kB 1984 kB 84 kB 1200 20828 kB 8736 kB 1984 kB 84 kB 1500 20828 kB 9452 kB 1984 kB 84 kB $ bup memtest -n300 -c5 --ignore-midx PackIdxList: using 361 indexes. VmSize VmRSS VmData VmStk 0 27444 kB 6552 kB 2516 kB 84 kB 300 27448 kB 15832 kB 2520 kB 84 kB 600 27448 kB 17220 kB 2520 kB 84 kB 900 27448 kB 18012 kB 2520 kB 84 kB 1200 27448 kB 18388 kB 2520 kB 84 kB 1500 27448 kB 18556 kB 2520 kB 84 kB # DISCUSSION When optimizing bup indexing, the first goal is to keep the VmRSS reasonably low. However, it might eventually be necessary to swap in all the indexes, simply because you're searching for a lot of objects, and this will cause your RSS to grow as large as VmSize eventually. The key word here is *eventually*. As long as VmRSS grows reasonably slowly, the amount of disk activity caused by accessing pack indexes is reasonably small. If it grows quickly, bup will probably spend most of its time swapping index data from disk instead of actually running your backup, so backups will run very slowly. The purpose of `bup memtest` is to give you an idea of how fast your memory usage is growing, and to help in optimizing bup for better memory use. If you have memory problems you might be asked to send the output of `bup memtest` to help diagnose the problems. Tip: try using `bup midx -a` or `bup midx -f` to see if it helps reduce your memory usage. Trivia: index memory usage in bup (or git) is only really a problem when adding a large number of previously unseen objects. This is because for each object, we need to absolutely confirm that it isn't already in the database, which requires us to search through *all* the existing pack indexes to ensure that none of them contain the object in question. In the more obvious case of searching for objects that *do* exist, the objects being searched for are typically related in some way, which means they probably all exist in a small number of packfiles, so memory usage will be constrained to just those packfile indexes. Since git users typically don't add a lot of files in a single run, git doesn't really need a program like `bup midx`. bup, on the other hand, spends most of its time backing up files it hasn't seen before, so its memory usage patterns are different. # SEE ALSO `bup-midx`(1) # BUP Part of the `bup`(1) suite. 
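As a concrete illustration of the tip above, memory growth with and without merged indexes can be compared directly; a minimal sketch (the cycle counts are arbitrary):

    $ bup memtest -n300 -c5     # baseline, using any existing .midx files
    $ bup midx -f               # merge all .idx files into a single .midx
    $ bup memtest -n300 -c5     # re-run and compare how quickly VmRSS grows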
bup-0.29/Documentation/bup-meta.md000066400000000000000000000107231303127641400171030ustar00rootroot00000000000000% bup-meta(1) Bup %BUP_VERSION% % Rob Browning % %BUP_DATE% # NAME bup-meta - create or extract a metadata archive # SYNOPSIS bup meta \--create ~ [-R] [-v] [-q] [\--no-symlinks] [\--no-paths] [-f *file*] \<*paths*...\> bup meta \--list ~ [-v] [-q] [-f *file*] bup meta \--extract ~ [-v] [-q] [\--numeric-ids] [\--no-symlinks] [-f *file*] bup meta \--start-extract ~ [-v] [-q] [\--numeric-ids] [\--no-symlinks] [-f *file*] bup meta \--finish-extract ~ [-v] [-q] [\--numeric-ids] [-f *file*] bup meta \--edit ~ [\--set-uid *uid* | \--set-gid *gid* | \--set-user *user* | \--set-group *group* | ...] \<*paths*...\> # DESCRIPTION `bup meta` creates, extracts, or otherwise manipulates metadata archives. A metadata archive contains the metadata information (timestamps, ownership, access permissions, etc.) for a set of filesystem paths. See `bup-restore`(1) for a description of the way ownership metadata is restored. # OPTIONS -c, \--create : Create a metadata archive for the specified *path*s. Write the archive to standard output unless `--file` is specified. -t, \--list : Display information about the metadata in an archive. Read the archive from standard input unless `--file` is specified. -x, \--extract : Extract a metadata archive. Conceptually, perform `--start-extract` followed by `--finish-extract`. Read the archive from standard input unless `--file` is specified. \--start-extract : Build a filesystem tree matching the paths stored in a metadata archive. By itself, this command does not produce a full restoration of the metadata. For a full restoration, this command must be followed by a call to `--finish-extract`. Once this command has finished, all of the normal files described by the metadata will exist and be empty. Restoring the data in those files, and then calling `--finish-extract` should restore the original tree. The archive will be read from standard input unless `--file` is specified. \--finish-extract : Finish applying the metadata stored in an archive to the filesystem. Normally, this command should follow a call to `--start-extract`. The archive will be read from standard input unless `--file` is specified. \--edit : Edit metadata archives. The result will be written to standard output unless `--file` is specified. -f, \--file=*filename* : Read the metadata archive from *filename* or write it to *filename* as appropriate. If *filename* is "-", then read from standard input or write to standard output. -R, \--recurse : Recursively descend into subdirectories during `--create`. \--xdev, \--one-file-system : don't cross filesystem boundaries -- though as with tar and rsync, the mount points themselves will still be handled. \--numeric-ids : Apply numeric IDs (user, group, etc.) rather than names during `--extract` or `--finish-extract`. \--symlinks : Record symbolic link targets when creating an archive, or restore symbolic links when extracting an archive (during `--extract` or `--start-extract`). This option is enabled by default. Specify `--no-symlinks` to disable it. \--paths : Record pathnames when creating an archive. This option is enabled by default. Specify `--no-paths` to disable it. \--set-uid=*uid* : Set the metadata uid to the integer *uid* during `--edit`. \--set-gid=*gid* : Set the metadata gid to the integer *gid* during `--edit`. \--set-user=*user* : Set the metadata user to *user* during `--edit`. 
\--unset-user : Remove the metadata user during `--edit`. \--set-group=*group* : Set the metadata user to *group* during `--edit`. \--unset-group : Remove the metadata group during `--edit`. -v, \--verbose : Be more verbose (can be used more than once). -q, \--quiet : Be quiet. # EXAMPLES # Create a metadata archive for /etc. $ bup meta -cRf etc.meta /etc bup: removing leading "/" from "/etc" # Extract the etc.meta archive (files will be empty). $ mkdir tmp && cd tmp $ bup meta -xf ../etc.meta $ ls etc # Restore /etc completely. $ mkdir tmp && cd tmp $ bup meta --start-extract -f ../etc.meta ...fill in all regular file contents using some other tool... $ bup meta --finish-extract -f ../etc.meta # Change user/uid to root. $ bup meta --edit --set-uid 0 --set-user root \ src.meta > dest.meta # BUGS Hard links are not handled yet. # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-midx.md000066400000000000000000000065131303127641400171200ustar00rootroot00000000000000% bup-midx(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-midx - create a multi-index (`.midx`) file from several `.idx` files # SYNOPSIS bup midx [-o *outfile*] \<-a|-f|*idxnames*...\> # DESCRIPTION `bup midx` creates a multi-index (`.midx`) file from one or more git pack index (`.idx`) files. Note: you should no longer need to run this command by hand. It gets run automatically by `bup-save`(1) and similar commands. # OPTIONS -o, \--output=*filename.midx* : use the given output filename for the `.midx` file. Default is auto-generated. -a, \--auto : automatically generate new `.midx` files for any `.idx` files where it would be appropriate. -f, \--force : force generation of a single new `.midx` file containing *all* your `.idx` files, even if other `.midx` files already exist. This will result in the fastest backup performance, but may take a long time to run. \--dir=*packdir* : specify the directory containing the `.idx`/`.midx` files to work with. The default is $BUP_DIR/objects/pack and $BUP_DIR/indexcache/*. \--max-files : maximum number of `.idx` files to open at a time. You can use this if you have an especially small number of file descriptors available, so that midx can complete (though possibly non-optimally) even if it can't open all your `.idx` files at once. The default value of this option should be fine for most people. \--check : validate a `.midx` file by ensuring that all objects in its contained `.idx` files exist inside the `.midx`. May be useful for debugging. # EXAMPLES $ bup midx -a Merging 21 indexes (2278559 objects). Table size: 524288 (17 bits) Reading indexes: 100.00% (2278559/2278559), done. midx-b66d7c9afc4396187218f2936a87b865cf342672.midx # DISCUSSION By default, bup uses git-formatted pack files, which consist of a pack file (containing objects) and an idx file (containing a sorted list of object names and their offsets in the .pack file). Normal idx files are convenient because it means you can use `git`(1) to access your backup datasets. However, idx files can get slow when you have a lot of very large packs (which git typically doesn't have, but bup often does). bup `.midx` files consist of a single sorted list of all the objects contained in all the .pack files it references. This list can be binary searched in about log2(m) steps, where m is the total number of objects. To further speed up the search, midx files also have a variable-sized fanout table that reduces the first n steps of the binary search. 
With the help of this fanout table, bup can narrow down which page of the midx file a given object id would be in (if it exists) with a single lookup. Thus, typical searches will only need to swap in two pages: one for the fanout table, and one for the object id. midx files are most useful when creating new backups, since searching for a nonexistent object in the repository necessarily requires searching through *all* the index files to ensure that it does not exist. (Searching for objects that *do* exist can be optimized; for example, consecutive objects are often stored in the same pack, so we can search that one first using an MRU algorithm.) # SEE ALSO `bup-save`(1), `bup-margin`(1), `bup-memtest`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-mux.md000066400000000000000000000010211303127641400167550ustar00rootroot00000000000000% bup-mux(1) Bup %BUP_VERSION% % Brandon Low % %BUP_DATE% # NAME bup-mux - multiplexes data and error streams over a connection # SYNOPSIS bup mux \ [options...] # DESCRIPTION `bup mux` is used in the bup client-server protocol to send both data and debugging/error output over the single connection stream. `bup mux bup server` might be used in an inetd server setup. # OPTIONS command : the command to run options : options for the command # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-newliner.md000066400000000000000000000024561303127641400200040ustar00rootroot00000000000000% bup-newliner(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-newliner - make sure progress messages don't overlap with output # SYNOPSIS \ 2>&1 | bup newliner # DESCRIPTION `bup newliner` is run automatically by bup. You shouldn't need it unless you're using it in some other program. Progress messages emitted by bup (and some other tools) are of the form "Message ### content\\r", that is, a status message containing a variable-length number, followed by a carriage return character and no newline. If these messages are printed more than once, they overwrite each other, so what the user sees is a single line with a continually-updating number. This works fine until some other message is printed. For example, progress messages are usually printed to stderr, but other program messages might be printed to stdout. If those messages are shorter than the progress message line, the screen will be left with weird looking artifacts as the two messages get mixed together. `bup newliner` prints extra space characters at the right time to make sure that doesn't happen. If you're running a program that has problems with these artifacts, you can usually fix them by piping its stdout *and* its stderr through bup newliner. # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-on.md000066400000000000000000000045051303127641400165720ustar00rootroot00000000000000% bup-on(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-on - run a bup server locally and client remotely # SYNOPSIS bup on \ index ... bup on \ save ... bup on \ split ... # DESCRIPTION `bup on` runs the given bup command on the given host using ssh. It runs a bup server on the local machine, so that commands like `bup save` on the remote machine can back up to the local machine. (You don't need to provide a `--remote` option to `bup save` in order for this to work.) See `bup-index`(1), `bup-save`(1), and so on for details of how each subcommand works. This 'reverse mode' operation is useful when the machine being backed up isn't supposed to be able to ssh into the backup server. 
For example, your backup server can be hidden behind a one-way firewall on a private or dynamic IP address; using an ssh key, it can be authorized to ssh into each of your important machines. After connecting to each destination machine, it initiates a backup, receiving the resulting data and storing in its local repository. For example, if you run several virtual private Linux machines on a remote hosting provider, you could back them up to a local (much less expensive) computer in your basement. # EXAMPLES # First index the files on the remote server $ bup on myserver index -vux /etc bup server: reading from stdin. Indexing: 2465, done. bup: merging indexes (186668/186668), done. bup server: done # Now save the files from the remote server to the # local $BUP_DIR $ bup on myserver save -n myserver-backup /etc bup server: reading from stdin. bup server: command: 'list-indexes' PackIdxList: using 7 indexes. Saving: 100.00% (241/241k, 648/648 files), done. bup server: received 55 objects. Indexing objects: 100% (55/55), done. bup server: command: 'quit' bup server: done # Now we can look at the resulting repo on the local # machine $ bup ftp 'cat /myserver-backup/latest/etc/passwd' root:x:0:0:root:/root:/bin/bash daemon:x:1:1:daemon:/usr/sbin:/bin/sh bin:x:2:2:bin:/bin:/bin/sh sys:x:3:3:sys:/dev:/bin/sh sync:x:4:65534:sync:/bin:/bin/sync ... # SEE ALSO `bup-index`(1), `bup-save`(1), `bup-split`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-prune-older.md000066400000000000000000000075631303127641400204210ustar00rootroot00000000000000% bup-prune-older(1) bup %BUP_VERSION% | bup %BUP_VERSION% % Rob Browning % %BUP_DATE% # NAME bup-prune-older - remove older saves (CAUTION: EXPERIMENTAL) # SYNOPSIS bup prune-older [options...] <*branch*...> # DESCRIPTION `bup prune-older` removes (permanently deletes) all saves except those preserved by the various keep arguments detailed below. At least one keep argument must be specified. This command is equivalent to a suitable `bup rm` invocation followed by `bup gc`. WARNING: This is one of the few bup commands that modifies your archive in intentionally destructive ways. Though if an attempt to `join` or `restore` the data you still care about after a `prune-older` succeeds, that's a fairly encouraging sign that the commands worked correctly. (The `t/compare-trees` command in the source tree can be used to help test before/after results.) # KEEP PERIODS A `--keep` PERIOD (as required below) must be an integer followed by a scale, or "forever". For example, 12y specifies a PERIOD of twelve years. Here are the valid scales: - s indicates seconds - min indicates minutes (60s) - h indicates hours (60m) - d indicates days (24h) - w indicates weeks (7d) - m indicates months (31d) - y indicates years (366d) - forever is infinitely far in the past As indicated, the PERIODS are computed with respect to the current time, or the `--wrt` value if specified, and do not respect any calendar, so `--keep-dailies-for 5d` means a period starting exactly 5 * 24 * 60 * 60 seconds before the starting point. # OPTIONS --keep-all-for PERIOD : when no smaller time scale --keep option applies, retain all saves within the given period. --keep-dailies-for PERIOD : when no smaller time scale --keep option applies, retain the oldest save for any day within the given period. --keep-monthlies-for PERIOD : when no smaller time scale --keep option applies, retain the oldest save for any month within the given period. 
--keep-yearlies-for PERIOD : when no smaller time scale --keep option applies, retain the oldest save for any year within the given period. --wrt UTC_SECONDS : when computing a keep period, place the most recent end of the range at UTC\_SECONDS, and any saves newer than this will be kept. --pretend : don't do anything, just list the actions that would be taken to standard output, one action per line like this: - SAVE + SAVE ... --gc : garbage collect the repository after removing the relevant saves. This is the default behavior, but it can be avoided with `--no-gc`. \--gc-threshold N : only rewrite a packfile if it's over N percent garbage; otherwise leave it alone. The default threshold is 10%. -*#*, \--compress *#* : set the compression level when rewriting archive data to # (a value from 0-9, where 9 is the highest and 0 is no compression). The default is 1 (fast, loose compression). -v, \--verbose : increase verbosity (can be specified more than once). # NOTES When `--verbose` is specified, the save periods will be summarized to standard error with lines like this: keeping monthlies since 1969-07-20-201800 keeping all yearlies ... It's possible that the current implementation might not be able to format the date if, for example, it is far enough back in time. In that case, you will see something like this: keeping yearlies since -30109891477 seconds before 1969-12-31-180000 ... # EXAMPLES # Keep all saves for the past month, and any older monthlies for # the past year. Delete everything else. $ bup prune-older --keep-all-for 1m --keep-monthlies-for 1y # Keep all saves for the past 6 months and delete everything else, # but only on the semester branch. $ bup prune-older --keep-all-for 6m semester # SEE ALSO `bup-rm`(1), `bup-gc`(1), and `bup-fsck`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-random.md000066400000000000000000000043261303127641400174370ustar00rootroot00000000000000% bup-random(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-random - generate a stream of random output # SYNOPSIS bup random [-S seed] [-fv] \ # DESCRIPTION `bup random` produces a stream of pseudorandom output bytes to stdout. Note: the bytes are *not* generated using a cryptographic algorithm and should never be used for security. Note that the stream of random bytes will be identical every time `bup random` is run, unless you provide a different `seed` value. This is intentional: the purpose of this program is to be able to run repeatable tests on large amounts of data, so we want identical data every time. `bup random` generates about 240 megabytes per second on a modern test system (Intel Core2), which is faster than you could achieve by reading data from most disks. Thus, it can be helpful when running microbenchmarks. # OPTIONS \ : the number of bytes of data to generate. Can be used with the suffices `k`, `M`, or `G` to indicate kilobytes, megabytes, or gigabytes, respectively. -S, \--seed=*seed* : use the given value to seed the pseudorandom number generator. The generated output stream will be identical for every stream seeded with the same value. The default seed is 1. A seed value of 0 is equivalent to 1. -f, \--force : generate output even if stdout is a tty. (Generating random data to a tty is generally considered ill-advised, but you can do if you really want.) -v, \--verbose : print a progress message showing the number of bytes that has been output so far. 
# EXAMPLES $ bup random 1k | sha1sum 2108c55d0a2687c8dacf9192677c58437a55db71 - $ bup random -S1 1k | sha1sum 2108c55d0a2687c8dacf9192677c58437a55db71 - $ bup random -S2 1k | sha1sum f71acb90e135d98dad7efc136e8d2cc30573e71a - $ time bup random 1G >/dev/null Random: 1024 Mbytes, done. real 0m4.261s user 0m4.048s sys 0m0.172s $ bup random 1G | bup split -t --bench Random: 1024 Mbytes, done. bup: 1048576.00kbytes in 18.59 secs = 56417.78 kbytes/sec 1092599b9c7b2909652ef1e6edac0796bfbfc573 # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-restore.md000066400000000000000000000220761303127641400176440ustar00rootroot00000000000000% bup-restore(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-restore - extract files from a backup set # SYNOPSIS bup restore [\--outdir=*outdir*] [\--exclude-rx *pattern*] [\--exclude-rx-from *filename*] [-v] [-q] \ # DESCRIPTION `bup restore` extracts files from a backup set (created with `bup-save`(1)) to the local filesystem. The specified *paths* are of the form /_branch_/_revision_/_some/where_. The components of the path are as follows: branch : the name of the backup set to restore from; this corresponds to the `--name` (`-n`) option to `bup save`. revision : the revision of the backup set to restore. The revision *latest* is always the most recent backup on the given branch. You can discover other revisions using `bup ls /branch`. some/where : the previously saved path (after any stripping/grafting) that you want to restore. For example, `etc/passwd`. If _some/where_ names a directory, `bup restore` will restore that directory and then recursively restore its contents. If _some/where_ names a directory and ends with a slash (ie. path/to/dir/), `bup restore` will restore the children of that directory directly to the current directory (or the `--outdir`). If _some/where_ does not end in a slash, the children will be restored to a subdirectory of the current directory. If _some/where_ names a directory and ends in '/.' (ie. path/to/dir/.), `bup restore` will do exactly what it would have done for path/to/dir, and then restore _dir_'s metadata to the current directory (or the `--outdir`). See the EXAMPLES section. Whenever path metadata is available, `bup restore` will attempt to restore it. When restoring ownership, bup implements tar/rsync-like semantics. It will normally prefer user and group names to uids and gids when they're available, but it will not try to restore the user unless running as root, and it will fall back to the numeric uid or gid whenever the metadata contains a user or group name that doesn't exist on the current system. The use of user and group names can be disabled via `--numeric-ids` (which can be important when restoring a chroot, for example), and as a special case, a uid or gid of 0 will never be remapped by name. Additionally, some systems don't allow setting a uid/gid that doesn't correspond with a known user/group. On those systems, bup will log an error for each relevant path. The `--map-user`, `--map-group`, `--map-uid`, `--map-gid` options may be used to adjust the available ownership information before any of the rules above are applied, but note that due to those rules, `--map-uid` and `--map-gid` will have no effect whenever a path has a valid user or group. In those cases, either `--numeric-ids` must be specified, or the user or group must be cleared by a suitable `--map-user foo=` or `--map-group foo=`. 
Hardlinks will also be restored when possible, but at least currently, no links will be made to targets outside the restore tree, and if the restore tree spans a different arrangement of filesystems from the save tree, some hardlink sets may not be completely restored. Also note that changing hardlink sets on disk between index and save may produce unexpected results. With the current implementation, bup will attempt to recreate any given hardlink set as it existed at index time, even if all of the files in the set weren't still hardlinked (but were otherwise identical) at save time. Note that during the restoration process, access to data within the restore tree may be more permissive than it was in the original source. Unless security is irrelevant, you must restore to a private subdirectory, and then move the resulting tree to its final position. See the EXAMPLES section for a demonstration. # OPTIONS -C, \--outdir=*outdir* : create and change to directory *outdir* before extracting the files. \--numeric-ids : restore numeric IDs (user, group, etc.) rather than names. \--exclude-rx=*pattern* : exclude any path matching *pattern*, which must be a Python regular expression (http://docs.python.org/library/re.html). The pattern will be compared against the full path rooted at the top of the restore tree, without anchoring, so "x/y" will match "ox/yard" or "box/yards". To exclude the contents of /tmp, but not the directory itself, use "^/tmp/.". (can be specified more than once) Note that the root of the restore tree (which matches '^/') is the top of the archive tree being restored, and has nothing to do with the filesystem destination. Given "restore ... /foo/latest/etc/", the pattern '^/passwd$' would match if a file named passwd had been saved as '/foo/latest/etc/passwd'. Examples: * '/foo$' - exclude any file named foo * '/foo/$' - exclude any directory named foo * '/foo/.' - exclude the content of any directory named foo * '^/tmp/.' - exclude root-level /tmp's content, but not /tmp itself \--exclude-rx-from=*filename* : read --exclude-rx patterns from *filename*, one pattern per-line (may be repeated). Ignore completely empty lines. \--sparse : write output data sparsely when reasonable. Currently, reasonable just means "at least whenever there are 512 or more consecutive zeroes". \--map-user *old*=*new* : for every path, restore the *old* (saved) user name as *new*. Specifying "" for *new* will clear the user. For example "--map-user foo=" will allow the uid to take effect for any path that originally had a user of "foo", unless countermanded by a subsequent "--map-user foo=..." specification. See DESCRIPTION above for further information. \--map-group *old*=*new* : for every path, restore the *old* (saved) group name as *new*. Specifying "" for *new* will clear the group. For example "--map-group foo=" will allow the gid to take effect for any path that originally had a group of "foo", unless countermanded by a subsequent "--map-group foo=..." specification. See DESCRIPTION above for further information. \--map-uid *old*=*new* : for every path, restore the *old* (saved) uid as *new*, unless countermanded by a subsequent "--map-uid *old*=..." option. Note that the uid will only be relevant for paths with no user. See DESCRIPTION above for further information. \--map-gid *old*=*new* : for every path, restore the *old* (saved) gid as *new*, unless countermanded by a subsequent "--map-gid *old*=..." option. Note that the gid will only be relevant for paths with no user. 
See DESCRIPTION above for further information. -v, \--verbose : increase log output. Given once, prints every directory as it is restored; given twice, prints every file and directory. -q, \--quiet : don't show the progress meter. Normally, is stderr is a tty, a progress display is printed that shows the total number of files restored. # EXAMPLES Create a simple test backup set: $ bup index -u /etc $ bup save -n mybackup /etc/passwd /etc/profile Restore just one file: $ bup restore /mybackup/latest/etc/passwd Restoring: 1, done. $ ls -l passwd -rw-r--r-- 1 apenwarr apenwarr 1478 2010-09-08 03:06 passwd Restore etc to test (no trailing slash): $ bup restore -C test /mybackup/latest/etc Restoring: 3, done. $ find test test test/etc test/etc/passwd test/etc/profile Restore the contents of etc to test (trailing slash): $ bup restore -C test /mybackup/latest/etc/ Restoring: 2, done. $ find test test test/passwd test/profile Restore the contents of etc and etc's metadata to test (trailing "/."): $ bup restore -C test /mybackup/latest/etc/. Restoring: 2, done. # At this point test and etc's metadata will match. $ find test test test/passwd test/profile Restore a tree without risk of unauthorized access: # mkdir --mode 0700 restore-tmp # bup restore -C restore-tmp /somebackup/latest/foo Restoring: 42, done. # mv restore-tmp/foo somewhere # rmdir restore-tmp Restore a tree, remapping an old user and group to a new user and group: # ls -l /original/y -rw-r----- 1 foo baz 3610 Nov 4 11:31 y # bup restore -C dest --map-user foo=bar --map-group baz=bax /x/latest/y Restoring: 42, done. # ls -l dest/y -rw-r----- 1 bar bax 3610 Nov 4 11:31 y Restore a tree, remapping an old uid to a new uid. Note that the old user must be erased so that bup won't prefer it over the uid: # ls -l /original/y -rw-r----- 1 foo baz 3610 Nov 4 11:31 y # ls -ln /original/y -rw-r----- 1 1000 1007 3610 Nov 4 11:31 y # bup restore -C dest --map-user foo= --map-uid 1000=1042 /x/latest/y Restoring: 97, done. # ls -ln dest/y -rw-r----- 1 1042 1007 3610 Nov 4 11:31 y An alternate way to do the same by quashing users/groups universally with `--numeric-ids`: # bup restore -C dest --numeric-ids --map-uid 1000=1042 /x/latest/y Restoring: 97, done. # SEE ALSO `bup-save`(1), `bup-ftp`(1), `bup-fuse`(1), `bup-web`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-rm.md000066400000000000000000000025021303127641400165670ustar00rootroot00000000000000% bup-rm(1) Bup %BUP_VERSION% % Rob Browning % %BUP_DATE% # NAME bup-rm - remove references to archive content (CAUTION: EXPERIMENTAL) # SYNOPSIS bup rm [-#|--verbose] <*branch*|*save*...> # DESCRIPTION `bup rm` removes the indicated *branch*es (backup sets) and *save*s. By itself, this command does not delete any actual data (nor recover any storage space), but it may make it very difficult or impossible to refer to the deleted items, unless there are other references to them (e.g. tags). A subsequent garbage collection, either by a `bup gc`, or by a normal `git gc`, may permanently delete data that is no longer reachable from the remaining branches or tags, and reclaim the related storage space. NOTE: This is one of the few bup commands that modifies your archive in intentionally destructive ways. # OPTIONS -v, \--verbose : increase verbosity (can be used more than once). -*#*, \--compress=*#* : set the compression level to # (a value from 0-9, where 9 is the highest and 0 is no compression). The default is 6. Note that `bup rm` may only write new commits. 
# EXAMPLES # Delete the backup set (branch) foo and a save in bar. $ bup rm /foo /bar/2014-10-21-214720 # SEE ALSO `bup-gc`(1), `bup-save`(1), `bup-fsck`(1), and `bup-tag`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-save.md000066400000000000000000000131721303127641400171140ustar00rootroot00000000000000% bup-save(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-save - create a new bup backup set # SYNOPSIS bup save [-r *host*:*path*] \<-t|-c|-n *name*\> [-#] [-f *indexfile*] [-v] [-q] [\--smaller=*maxsize*] \; # DESCRIPTION `bup save` saves the contents of the given files or paths into a new backup set and optionally names that backup set. Note that in order to refer to your backup set later (i.e. for restoration), you must either specify `--name` (the normal case), or record the tree or commit id printed by `--tree` or `--commit`. Before trying to save files using `bup save`, you should first update the index using `bup index`. The reasons for separating the two steps are described in the man page for `bup-index`(1). By default, metadata will be saved for every path, and the metadata for any unindexed parent directories of indexed paths will be taken directly from the filesystem. However, if `--strip`, `--strip-path`, or `--graft` is specified, metadata will not be saved for the root directory (*/*). See `bup-restore`(1) for more information about the handling of metadata. # OPTIONS -r, \--remote=*host*:*path* : save the backup set to the given remote server. If *path* is omitted, uses the default path on the remote server (you still need to include the ':'). The connection to the remote server is made with SSH. If you'd like to specify which port, user or private key to use for the SSH connection, we recommend you use the `~/.ssh/config` file. -t, \--tree : after creating the backup set, print out the git tree id of the resulting backup. -c, \--commit : after creating the backup set, print out the git commit id of the resulting backup. -n, \--name=*name* : after creating the backup set, create a git branch named *name* so that the backup can be accessed using that name. If *name* already exists, the new backup will be considered a descendant of the old *name*. (Thus, you can continually create new backup sets with the same name, and later view the history of that backup set to see how files have changed over time.) -d, \--date=*date* : specify the date of the backup, in seconds since the epoch, instead of the current time. -f, \--indexfile=*indexfile* : use a different index filename instead of `$BUP_DIR/bupindex`. -v, \--verbose : increase verbosity (can be used more than once). With one -v, prints every directory name as it gets backed up. With two -v, also prints every filename. -q, \--quiet : disable progress messages. \--smaller=*maxsize* : don't back up files >= *maxsize* bytes. You can use this to run frequent incremental backups of your small files, which can usually be backed up quickly, and skip over large ones (like virtual machine images) which take longer. Then you can back up the large files less frequently. Use a suffix like k, M, or G to specify multiples of 1024, 1024*1024, 1024*1024*1024 respectively. \--bwlimit=*bytes/sec* : don't transmit more than *bytes/sec* bytes per second to the server. This is good for making your backups not suck up all your network bandwidth. Use a suffix like k, M, or G to specify multiples of 1024, 1024*1024, 1024*1024*1024 respectively. \--strip : strips the path that is given from all files and directories. 
A directory */root/chroot/etc* saved with "bup save -n chroot \--strip /root/chroot" would be saved as */etc*. Note that currently, metadata will not be saved for the root directory (*/*) when this option is specified. \--strip-path=*path-prefix* : strips the given path prefix *path-prefix* from all files and directories. A directory */root/chroot/webserver* saved with "bup save -n webserver \--strip-path=/root/chroot" would be saved as */webserver/etc*. Note that currently, metadata will not be saved for the root directory (*/*) when this option is specified. \--graft=*old_path*=*new_path* : a graft point *old_path*=*new_path* (can be used more than once). A directory */root/chroot/a/etc* saved with "bup save -n chroot \--graft /root/chroot/a=/chroot/a" would be saved as */chroot/a/etc*. Note that currently, metadata will not be saved for the root directory (*/*) when this option is specified. -*#*, \--compress=*#* : set the compression level to # (a value from 0-9, where 9 is the highest and 0 is no compression). The default is 1 (fast, loose compression) # EXAMPLES $ bup index -ux /etc Indexing: 1981, done. $ bup save -r myserver: -n my-pc-backup --bwlimit=50k /etc Reading index: 1981, done. Saving: 100.00% (998/998k, 1981/1981 files), done. $ ls /home/joe/chroot/httpd bin var $ bup index -ux /home/joe/chroot/httpd Indexing: 1337, done. $ bup save --strip -n joes-httpd-chroot /home/joe/chroot/httpd Reading index: 1337, done. Saving: 100.00% (998/998k, 1337/1337 files), done. $ bup ls joes-httpd-chroot/latest/ bin/ var/ $ bup save --strip-path=/home/joe/chroot -n joes-chroot \ /home/joe/chroot/httpd Reading index: 1337, done. Saving: 100.00% (998/998k, 1337/1337 files), done. $ bup ls joes-chroot/latest/ httpd/ $ bup save --graft /home/joe/chroot/httpd=/http-chroot \ -n joe /home/joe/chroot/httpd Reading index: 1337, done. Saving: 100.00% (998/998k, 1337/1337 files), done. $ bup ls joe/latest/ http-chroot/ # SEE ALSO `bup-index`(1), `bup-split`(1), `bup-on`(1), `bup-restore`(1), `ssh_config`(5) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-server.md000066400000000000000000000025141303127641400174620ustar00rootroot00000000000000% bup-server(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-server - the server side of the bup client-server relationship # SYNOPSIS bup server # DESCRIPTION `bup server` is the server side of a remote bup session. If you use `bup-split`(1) or `bup-save`(1) with the `-r` option, they will ssh to the remote server and run `bup server` to receive the transmitted objects. There is normally no reason to run `bup server` yourself. # MODES smart : In this mode, the server checks each incoming object against the idx files in its repository. If any object already exists, it tells the client about the idx file it was found in, allowing the client to download that idx and avoid sending duplicate data. This is `bup-server`'s default mode. dumb : In this mode, the server will not check its local index before writing an object. To avoid writing duplicate objects, the server will tell the client to download all of its `.idx` files at the start of the session. This mode is useful on low powered server hardware (ie router/slow NAS). # FILES $BUP_DIR/bup-dumb-server : Activate dumb server mode, as discussed above. This file is not created by default in new repositories. # SEE ALSO `bup-save`(1), `bup-split`(1) # BUP Part of the `bup`(1) suite. 
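The FILES entry above suggests a simple way to switch modes; a minimal sketch, assuming the file's presence is what activates dumb mode and that `BUP_DIR` points at the server's repository:

    # Switch the repository to dumb server mode.
    $ touch "$BUP_DIR/bup-dumb-server"
    # Delete the marker to return to the default smart mode.
    $ rm "$BUP_DIR/bup-dumb-server"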
bup-0.29/Documentation/bup-split.md000066400000000000000000000137401303127641400173120ustar00rootroot00000000000000% bup-split(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-split - save individual files to bup backup sets # SYNOPSIS bup split \[-t\] \[-c\] \[-n *name*\] COMMON\_OPTIONS bup split -b COMMON\_OPTIONS bup split \<--noop \[--copy\]|--copy\> COMMON\_OPTIONS COMMON\_OPTIONS ~ \[-r *host*:*path*\] \[-v\] \[-q\] \[-d *seconds-since-epoch*\] \[\--bench\] \[\--max-pack-size=*bytes*\] \[-#\] \[\--bwlimit=*bytes*\] \[\--max-pack-objects=*n*\] \[\--fanout=*count*\] \[\--keep-boundaries\] \[--git-ids | filenames...\] # DESCRIPTION `bup split` concatenates the contents of the given files (or if no filenames are given, reads from stdin), splits the content into chunks of around 8k using a rolling checksum algorithm, and saves the chunks into a bup repository. Chunks which have previously been stored are not stored again (ie. they are 'deduplicated'). Because of the way the rolling checksum works, chunks tend to be very stable across changes to a given file, including adding, deleting, and changing bytes. For example, if you use `bup split` to back up an XML dump of a database, and the XML file changes slightly from one run to the next, nearly all the data will still be deduplicated and the size of each backup after the first will typically be quite small. Another technique is to pipe the output of the `tar`(1) or `cpio`(1) programs to `bup split`. When individual files in the tarball change slightly or are added or removed, bup still processes the remainder of the tarball efficiently. (Note that `bup save` is usually a more efficient way to accomplish this, however.) To get the data back, use `bup-join`(1). # MODES These options select the primary behavior of the command, with -n being the most likely choice. -n, \--name=*name* : after creating the dataset, create a git branch named *name* so that it can be accessed using that name. If *name* already exists, the new dataset will be considered a descendant of the old *name*. (Thus, you can continually create new datasets with the same name, and later view the history of that dataset to see how it has changed over time.) The original data will also be available as a top-level file named "data" in the VFS, accessible via `bup fuse`, `bup ftp`, etc. -t, \--tree : output the git tree id of the resulting dataset. -c, \--commit : output the git commit id of the resulting dataset. -b, \--blobs : output a series of git blob ids that correspond to the chunks in the dataset. Incompatible with -n, -t, and -c. \--noop : read the data and split it into blocks based on the "bupsplit" rolling checksum algorithm, but don't do anything with the blocks. This is mostly useful for benchmarking. Incompatible with -n, -t, -c, and -b. \--copy : like `--noop`, but also write the data to stdout. This can be useful for benchmarking the speed of read+bupsplit+write for large amounts of data. Incompatible with -n, -t, -c, and -b. # OPTIONS -r, \--remote=*host*:*path* : save the backup set to the given remote server. If *path* is omitted, uses the default path on the remote server (you still need to include the ':'). The connection to the remote server is made with SSH. If you'd like to specify which port, user or private key to use for the SSH connection, we recommend you use the `~/.ssh/config` file. Even though the destination is remote, a local bup repository is still required. 
-d, \--date=*seconds-since-epoch* : specify the date inscribed in the commit (seconds since 1970-01-01). -q, \--quiet : disable progress messages. -v, \--verbose : increase verbosity (can be used more than once). \--git-ids : stdin is a list of git object ids instead of raw data. `bup split` will read the contents of each named git object (if it exists in the bup repository) and split it. This might be useful for converting a git repository with large binary files to use bup-style hashsplitting instead. This option is probably most useful when combined with `--keep-boundaries`. \--keep-boundaries : if multiple filenames are given on the command line, they are normally concatenated together as if the content all came from a single file. That is, the set of blobs/trees produced is identical to what it would have been if there had been a single input file. However, if you use `--keep-boundaries`, each file is split separately. You still only get a single tree or commit or series of blobs, but each blob comes from only one of the files; the end of one of the input files always ends a blob. \--bench : print benchmark timings to stderr. \--max-pack-size=*bytes* : never create git packfiles larger than the given number of bytes. Default is 1 billion bytes. Usually there is no reason to change this. \--max-pack-objects=*numobjs* : never create git packfiles with more than the given number of objects. Default is 200 thousand objects. Usually there is no reason to change this. \--fanout=*numobjs* : when splitting very large files, try and keep the number of elements in trees to an average of *numobjs*. \--bwlimit=*bytes/sec* : don't transmit more than *bytes/sec* bytes per second to the server. This is good for making your backups not suck up all your network bandwidth. Use a suffix like k, M, or G to specify multiples of 1024, 1024*1024, 1024*1024*1024 respectively. -*#*, \--compress=*#* : set the compression level to # (a value from 0-9, where 9 is the highest and 0 is no compression). The default is 1 (fast, loose compression) # EXAMPLES $ tar -cf - /etc | bup split -r myserver: -n mybackup-tar tar: Removing leading /' from member names Indexing objects: 100% (196/196), done. $ bup join -r myserver: mybackup-tar | tar -tf - | wc -l 1961 # SEE ALSO `bup-join`(1), `bup-index`(1), `bup-save`(1), `bup-on`(1), `ssh_config`(5) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-tag.md000066400000000000000000000027401303127641400167300ustar00rootroot00000000000000% bup-tag(1) Bup %BUP_VERSION% % Gabriel Filion % %BUP_DATE% # NAME bup-tag - tag a commit in the bup repository # SYNOPSIS bup tag bup tag [-f] \ \ bup tag -d [-f] \ # DESCRIPTION `bup tag` lists, creates or deletes a tag in the bup repository. A tag is an easy way to retrieve a specific commit. It can be used to mark a specific backup for easier retrieval later. When called without any arguments, the command lists all tags that can be found in the repository. When called with a tag name and a commit ID or ref name, it creates a new tag with the given name, if it doesn't already exist, that points to the commit given in the second argument. When called with '-d' and a tag name, it removes the given tag, if it exists. bup exposes the contents of backups with current tags, via any command that lists or shows backups. They can be found under the /.tag directory. For example, the 'ftp' command will show the tag named 'tag1' under /.tag/tag1. 
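Tags also count as references for commands like `bup rm` and `bup gc` (see `bup-rm`(1)), so tagging a commit keeps the data it references reachable during later cleanup. A minimal sketch (the tag and branch names are hypothetical):

    # Mark the current tip of the hostx-backup branch so that later
    # rm/gc runs still have a reference to it.
    $ bup tag keep-2014-q4 hostx-backup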
# OPTIONS -d, \--delete : delete a tag -f, \--force : Overwrite the named tag even if it already exists. With -f, don't report a missing tag as an error. # EXAMPLES $ bup tag new-puppet-version hostx-backup $ bup tag new-puppet-version $ bup ftp "ls /.tag/new-puppet-version" files.. $ bup tag -d new-puppet-version # SEE ALSO `bup-save`(1), `bup-split`(1), `bup-ftp`(1), `bup-fuse`(1), `bup-web`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-tick.md000066400000000000000000000012451303127641400171060ustar00rootroot00000000000000% bup-tick(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup-tick - wait for up to one second # SYNOPSIS bup tick # DESCRIPTION `bup tick` waits until `time`(2) returns a different value than it originally did. Since time() has a granularity of one second, this can cause a delay of up to one second. This program is useful for writing tests that need to ensure a file date will be seen as modified. It is slightly better than `sleep`(1) since it sometimes waits for less than one second. # EXAMPLES $ date; bup tick; date Sat Feb 6 16:59:58 EST 2010 Sat Feb 6 16:59:59 EST 2010 # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup-web.md000066400000000000000000000031521303127641400167300ustar00rootroot00000000000000% bup-web(1) Bup %BUP_VERSION% % Joe Beda % %BUP_DATE% # NAME bup-web - Start web server to browse bup repository # SYNOPSIS bup web [[hostname]:port] bup web unix://path # DESCRIPTION `bup web` starts a web server that can browse bup repositories. The file hierarchy is the same as that shown by `bup-fuse`(1), `bup-ls`(1) and `bup-ftp`(1). `hostname` and `port` default to 127.0.0.1 and 8080, respectively, and hence `bup web` will only offer up the web server to locally running clients. If you'd like to expose the web server to anyone on your network (dangerous!) you can omit the bind address to bind to all available interfaces: `:8080`. When `unix://path` is specified, the server will listen on the filesystem socket at `path` rather than a network socket. A `SIGTERM` signal may be sent to the server to request an orderly shutdown. # OPTIONS --human-readable : display human readable file sizes (i.e. 3.9K, 4.7M) --browser : open the site in the default browser # EXAMPLES $ bup web Serving HTTP on 127.0.0.1:8080... ^C Interrupted. $ bup web :8080 Serving HTTP on 0.0.0.0:8080... ^C Interrupted. $ bup web unix://socket & Serving HTTP on filesystem socket 'socket' $ curl --unix-socket ./socket http://localhost/ $ fg bup web unix://socket ^C Interrupted. $ bup web & [1] 30980 Serving HTTP on 127.0.0.1:8080... $ kill -s TERM 30980 Shutdown requested $ wait 30980 $ echo $? 0 # SEE ALSO `bup-fuse`(1), `bup-ls`(1), `bup-ftp`(1), `bup-restore`(1), `kill`(1) # BUP Part of the `bup`(1) suite. bup-0.29/Documentation/bup.md000066400000000000000000000047321303127641400161620ustar00rootroot00000000000000% bup(1) Bup %BUP_VERSION% % Avery Pennarun % %BUP_DATE% # NAME bup - Backup program using rolling checksums and git file formats # SYNOPSIS bup [global options...] \<command\> [options...] # DESCRIPTION `bup` is a program for making backups of your files using the git file format. Unlike `git`(1) itself, bup is optimized for handling huge data sets including individual very large files (such as virtual machine images). However, once a backup set is created, it can still be accessed using git tools. The individual bup subcommands appear in their own man pages. # GLOBAL OPTIONS \--version : print bup's version number.
Equivalent to `bup-version`(1) -d, \--bup-dir=*BUP_DIR* : use the given BUP_DIR parameter as the bup repository location, instead of reading it from the $BUP_DIR environment variable or using the default `~/.bup` location. # COMMONLY USED SUBCOMMANDS `bup-fsck`(1) : Check backup sets for damage and add redundancy information `bup-ftp`(1) : Browse backup sets using an ftp-like client `bup-fuse`(1) : Mount your backup sets as a filesystem `bup-help`(1) : Print detailed help for the given command `bup-index`(1) : Create or display the index of files to back up `bup-on`(1) : Backup a remote machine to the local one `bup-restore`(1) : Extract files from a backup set `bup-save`(1) : Save files into a backup set (note: run "bup index" first) `bup-web`(1) : Launch a web server to examine backup sets # RARELY USED SUBCOMMANDS `bup-damage`(1) : Deliberately destroy data `bup-drecurse`(1) : Recursively list files in your filesystem `bup-init`(1) : Initialize a bup repository `bup-join`(1) : Retrieve a file backed up using `bup-split`(1) `bup-ls`(1) : Browse the files in your backup sets `bup-margin`(1) : Determine how close your bup repository is to armageddon `bup-memtest`(1) : Test bup memory usage statistics `bup-midx`(1) : Index objects to speed up future backups `bup-newliner`(1) : Make sure progress messages don't overlap with output `bup-random`(1) : Generate a stream of random output `bup-server`(1) : The server side of the bup client-server relationship `bup-split`(1) : Split a single file into its own backup set `bup-tick`(1) : Wait for up to one second. `bup-version`(1) : Report the version number of your copy of bup. # SEE ALSO `git`(1) and the *README* file from the bup distribution. The home of bup is at . bup-0.29/HACKING000066400000000000000000000103571303127641400132300ustar00rootroot00000000000000 Conventions? Are you kidding? OK fine. Code Branching Model ==================== The master branch is what we consider the main-line of development, and the last, non-rc tag on master is the most recent stable release. Any branch with a "tmp/" prefix might be rebased (often), so keep that in mind when using or depending on one. Any branch with a "tmp/review/" prefix corresponds to a patchset submitted to the mailing list. We try to maintain these branches to make the review process easier for those not as familiar with patches via email. Current Trajectory ================== Now that we've finished the 0.29 release, we're working on 0.30, and although we're not certain which new features will be included, here are likely candidates: - Support for transferring saves between repositories and rewriting branches. and these are also under consideration: - Better VFS performance for large repositories (i.e. fuse, ls, web...). - Incremental indexing via inotify. - Smarter (and quieter) handling of cross-filesystem metadata. - Support for more general purpose push/pull of branches, saves, and tags between repositories. (See the bup-get patch series.) If you have the time and inclination, please help review patches posted to the list, or post your own. (See "ways to help" below.) More specific ways to help ========================== Testing -- yes please. With respect to patches, bup development is handled via the mailing list, and all patches should be sent to the list for review (see "Submitting Patches" below). 
In most cases, we try to wait until we have at least one or two "Reviewed-by:" replies to a patch posted to the list before incorporating it into master, so reviews are an important way to help. We also love a good "Tested-by:" -- the more the merrier. Testing ======= You can run the test suite much more quickly via "make -j test" (as compared to "make test"), at the expense of slightly more confusing output (interleaved parallel test output), and inaccurate intermediate success/failure counts, but the final counts displayed should be correct. Individual non-Python tests can be run via "./wvtest run t/TEST" and if you'd like to see all of the test output, you can omit the wvtest run wrapper: "t/TEST" Individual Python tests can be run via "./wvtest run ./wvtest.py lib/bup/t/TEST", and as above, you can see all the output by omitting the wvtest run wrapper like this: "./wvtest.py lib/bup/t/TEST" Submitting patches ================== As mentioned, all patches should be posted to the mailing list for review, and must be "signed off" by the author before official inclusion (see ./SIGNED-OFF-BY). You can create a "signed off" set of patches in ./patches, ready for submission to the list, like this: git format-patch -s -o patches origin/master which will include all of the patches since origin/master on your current branch. Then you can send them to the list like this: git send-email --to bup-list@googlegroups.com --compose patches/* The use of --compose will cause git to ask you to edit a cover letter that will be sent as the first message. It's also possible to handle everything in one step: git send-email -s --to bup-list@googlegroups.com --compose origin/master and you can add --annotate if you'd like to review or edit each patch before it's sent. For single patches, this might be easier: git send-email -s --to bup-list@googlegroups.com --annotate -n1 HEAD which will send the top patch on the current branch, and will stop to allow you to add comments. You can add comments to the section with the diffstat without affecting the commit message. Of course, unless your machine is set up to handle outgoing mail locally, you may need to configure git to be able to send mail. See git-send-email(1) for further details. Oh, and we do have a ./CODING-STYLE, hobgoblins and all, though don't let that scare you off. We're not all that fierce. Even More Generally =================== It's not like we have a lot of hard and fast rules, but some of the ideas here aren't altogether terrible: http://www.kernel.org/doc/Documentation/SubmittingPatches In particular, we've been paying at least some attention to the bits regarding Acked-by:, Reported-by:, Tested-by: and Reviewed-by:. bup-0.29/LICENSE000066400000000000000000000624141303127641400132470ustar00rootroot00000000000000 Unless otherwise stated below, the files in this project may be distributed under the terms of the following license. (The LGPL version 2.) In addition, bupsplit.c, bupsplit.h, and options.py may be redistributed according to the separate (BSD-style) license written inside those files. The definition of the relpath function was taken from CPython (tag v2.6, file Lib/posixpath.py, hg-commit 95fff5a6a276) and is covered under the terms of the PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2. GNU LIBRARY GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1991 Free Software Foundation, Inc. 
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. [This is the first released version of the library GPL. It is numbered 2 because it goes with version 2 of the ordinary GPL.] Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public Licenses are intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This license, the Library General Public License, applies to some specially designated Free Software Foundation software, and to any other libraries whose authors decide to use it. You can use it for your libraries, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the library, or if you modify it. For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link a program with the library, you must provide complete object files to the recipients so that they can relink them with the library, after making changes to the library and recompiling it. And you must show them these terms so they know their rights. Our method of protecting your rights has two steps: (1) copyright the library, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the library. Also, for each distributor's protection, we want to make certain that everyone understands that there is no warranty for this free library. If the library is modified by someone else and passed on, we want its recipients to know that what they have is not the original version, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that companies distributing free software will individually obtain patent licenses, thus in effect transforming the program into proprietary software. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. Most GNU software, including some libraries, is covered by the ordinary GNU General Public License, which was designed for utility programs. This license, the GNU Library General Public License, applies to certain designated libraries. This license is quite different from the ordinary one; be sure to read it in full, and don't assume that anything in it is the same as in the ordinary license. The reason we have a separate public license for some libraries is that they blur the distinction we usually make between modifying or adding to a program and simply using it. 
Linking a program with a library, without changing the library, is in some sense simply using the library, and is analogous to running a utility program or application program. However, in a textual and legal sense, the linked executable is a combined work, a derivative of the original library, and the ordinary General Public License treats it as such. Because of this blurred distinction, using the ordinary General Public License for libraries did not effectively promote software sharing, because most developers did not use the libraries. We concluded that weaker conditions might promote sharing better. However, unrestricted linking of non-free programs would deprive the users of those programs of all benefit from the free status of the libraries themselves. This Library General Public License is intended to permit developers of non-free programs to use free libraries, while preserving your freedom as a user of such programs to change the free libraries that are incorporated in them. (We have not seen how to achieve this as regards changes in header files, but we have achieved it as regards changes in the actual functions of the Library.) The hope is that this will lead to faster development of free libraries. The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a "work based on the library" and a "work that uses the library". The former contains code derived from the library, while the latter only works together with the library. Note that it is possible for a library to be covered by the ordinary General Public License rather than by this special one. GNU LIBRARY GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License Agreement applies to any software library which contains a notice placed by the copyright holder or other authorized party saying it may be distributed under the terms of this Library General Public License (also called "this License"). Each licensee is addressed as "you". A "library" means a collection of software functions and/or data prepared so as to be conveniently linked with application programs (which use some of those functions and data) to form executables. The "Library", below, refers to any such software library or work which has been distributed under these terms. A "work based on the Library" means either the Library or any derivative work under copyright law: that is to say, a work containing the Library or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language. (Hereinafter, translation is included without limitation in the term "modification".) "Source code" for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library. Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running a program using the Library is not restricted, and output from such a program is covered only if its contents constitute a work based on the Library (independent of the use of the Library in a tool for writing it). Whether that is true depends on what the Library does and what the program that uses the Library does. 1. 
You may copy and distribute verbatim copies of the Library's complete source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and distribute a copy of this License along with the Library. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) The modified work must itself be a software library. b) You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change. c) You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License. d) If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of its purpose remains meaningful. (For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Library, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Library, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Library. In addition, mere aggregation of another work not based on the Library with the Library (or with a work based on the Library) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may opt to apply the terms of the ordinary GNU General Public License instead of this License to a given copy of the Library. To do this, you must alter all the notices that refer to this License, so that they refer to the ordinary GNU General Public License, version 2, instead of to this License. (If a newer version than version 2 of the ordinary GNU General Public License has appeared, then you can specify that version instead if you wish.) Do not make any other change in these notices. 
Once this change is made in a given copy, it is irreversible for that copy, so the ordinary GNU General Public License applies to all subsequent copies and derivative works made from that copy. This option is useful when you wish to copy part of the code of the Library into a program that is not a library. 4. You may copy and distribute the Library (or a portion or derivative of it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange. If distribution of object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies the requirement to distribute the source code, even though third parties are not compelled to copy the source along with the object code. 5. A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". Such a work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License. However, linking a "work that uses the Library" with the Library creates an executable that is a derivative of the Library (because it contains portions of the Library), rather than a "work that uses the library". The executable is therefore covered by this License. Section 6 states terms for distribution of such executables. When a "work that uses the Library" uses material from a header file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law. If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section 6.) Otherwise, if the work is a derivative of the Library, you may distribute the object code for the work under the terms of Section 6. Any executables containing that work also fall under Section 6, whether or not they are linked directly with the Library itself. 6. As an exception to the Sections above, you may also compile or link a "work that uses the Library" with the Library to produce a work containing portions of the Library, and distribute that work under terms of your choice, provided that the terms permit modification of the work for the customer's own use and reverse engineering for debugging such modifications. You must give prominent notice with each copy of the work that the Library is used in it and that the Library and its use are covered by this License. You must supply a copy of this License. If the work during execution displays copyright notices, you must include the copyright notice for the Library among them, as well as a reference directing the user to the copy of this License. 
Also, you must do one of these things: a) Accompany the work with the complete corresponding machine-readable source code for the Library including whatever changes were used in the work (which must be distributed under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with the complete machine-readable "work that uses the Library", as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.) b) Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution. c) If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified materials from the same place. d) Verify that the user has already received a copy of these materials or that you have already sent this user a copy. For an executable, the required form of the "work that uses the Library" must include any data and utility programs needed for reproducing the executable from it. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. It may happen that this requirement contradicts the license restrictions of other proprietary libraries that do not normally accompany the operating system. Such a contradiction means you cannot use both them and the Library together in an executable that you distribute. 7. You may place library facilities that are a work based on the Library side-by-side in a single library together with other library facilities not covered by this License, and distribute such a combined library, provided that the separate distribution of the work based on the Library and of the other library facilities is otherwise permitted, and provided that you do these two things: a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities. This must be distributed under the terms of the Sections above. b) Give prominent notice with the combined library of the fact that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work. 8. You may not copy, modify, sublicense, link with, or distribute the Library except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, link with, or distribute the Library is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 9. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Library or its derivative works. These actions are prohibited by law if you do not accept this License. 
Therefore, by modifying or distributing the Library (or any work based on the Library), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Library or works based on it. 10. Each time you redistribute the Library (or any work based on the Library), the recipient automatically receives a license from the original licensor to copy, distribute, link with or modify the Library subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 11. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Library at all. For example, if a patent license would not permit royalty-free redistribution of the Library by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Library. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply, and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 12. If the distribution and/or use of the Library is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Library under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 13. The Free Software Foundation may publish revised and/or new versions of the Library General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Library specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. 
If the Library does not specify a license version number, you may choose any version ever published by the Free Software Foundation. 14. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Libraries If you develop a new library, and you want it to be of the greatest possible use to the public, we recommend making it free software that everyone can redistribute and change. You can do so by permitting redistribution under these terms (or, alternatively, under the terms of the ordinary General Public License). To apply these terms, attach the following notices to the library. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This library is free software; you can redistribute it and/or modify it under the terms of the GNU Library General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Library General Public License for more details. You should have received a copy of the GNU Library General Public License along with this library; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Also add information on how to contact you by electronic and paper mail. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the library, if necessary. 
Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the library `Frob' (a library for tweaking knobs) written by James Random Hacker. , 1 April 1990 Ty Coon, President of Vice That's all there is to it! bup-0.29/Makefile000066400000000000000000000214171303127641400137000ustar00rootroot00000000000000 SHELL := bash .DEFAULT_GOAL := all # See config/config.vars.in (sets bup_python, among other things) include config/config.vars pf := set -o pipefail define isok && echo " ok" || echo " no" endef # If ok, strip trailing " ok" and return the output, otherwise, error define shout $(if $(subst ok,,$(lastword $(1))),$(error $(2)),$(shell x="$(1)"; echo $${x%???})) endef sampledata_rev := $(shell t/configure-sampledata --revision $(isok)) sampledata_rev := \ $(call shout,$(sampledata_rev),Could not parse sampledata revision) current_sampledata := t/sampledata/var/rev/v$(sampledata_rev) os := $(shell ($(pf); uname | sed 's/[-_].*//') $(isok)) os := $(call shout,$(os),Unable to determine OS) CFLAGS := -Wall -O2 -Werror -Wno-unknown-pragmas $(PYINCLUDE) $(CFLAGS) CFLAGS := -D_FILE_OFFSET_BITS=64 $(CFLAGS) SOEXT:=.so ifeq ($(os),CYGWIN) SOEXT:=.dll endif ifdef TMPDIR test_tmp := $(TMPDIR) else test_tmp := $(CURDIR)/t/tmp endif initial_setup := $(shell ./configure-version --update $(isok)) initial_setup := $(call shout,$(initial_setup),Version configuration failed)) config/config.vars: configure config/configure config/configure.inc \ $(wildcard config/*.in) MAKE="$(MAKE)" ./configure bup_cmds := cmd/bup-python\ $(patsubst cmd/%-cmd.py,cmd/bup-%,$(wildcard cmd/*-cmd.py)) \ $(patsubst cmd/%-cmd.sh,cmd/bup-%,$(wildcard cmd/*-cmd.sh)) bup_deps := bup lib/bup/_checkout.py lib/bup/_helpers$(SOEXT) $(bup_cmds) all: $(bup_deps) Documentation/all $(current_sampledata) bup: ln -s main.py bup $(current_sampledata): t/configure-sampledata --setup define install-python-bin set -e; \ sed -e '1 s|.*|#!$(bup_python)|; 2,/^# end of bup preamble$$/d' $1 > $2; \ chmod 0755 $2; endef PANDOC ?= $(shell type -p pandoc) ifeq (,$(PANDOC)) $(shell echo "Warning: pandoc not found; skipping manpage generation" 1>&2) man_md := else man_md := $(wildcard Documentation/*.md) endif man_roff := $(patsubst %.md,%.1,$(man_md)) man_html := $(patsubst %.md,%.html,$(man_md)) INSTALL=install PREFIX=/usr/local MANDIR=$(PREFIX)/share/man DOCDIR=$(PREFIX)/share/doc/bup BINDIR=$(PREFIX)/bin LIBDIR=$(PREFIX)/lib/bup dest_mandir := $(DESTDIR)$(MANDIR) dest_docdir := $(DESTDIR)$(DOCDIR) dest_bindir := $(DESTDIR)$(BINDIR) dest_libdir := $(DESTDIR)$(LIBDIR) install: all $(INSTALL) -d $(dest_bindir) \ $(dest_libdir)/bup $(dest_libdir)/cmd \ $(dest_libdir)/web $(dest_libdir)/web/static test -z "$(man_roff)" || install -d $(dest_mandir)/man1 test -z "$(man_roff)" || $(INSTALL) -m 0644 $(man_roff) $(dest_mandir)/man1 test -z "$(man_html)" || install -d $(dest_docdir) test -z "$(man_html)" || $(INSTALL) -m 0644 $(man_html) $(dest_docdir) $(call install-python-bin,bup,"$(dest_bindir)/bup") set -e; \ for cmd in $$(ls cmd/bup-* | grep -v cmd/bup-python); do \ $(call install-python-bin,"$$cmd","$(dest_libdir)/$$cmd") \ done $(INSTALL) -pm 0644 \ lib/bup/*.py \ $(dest_libdir)/bup $(INSTALL) -pm 0755 \ lib/bup/*$(SOEXT) \ $(dest_libdir)/bup $(INSTALL) -pm 0644 \ lib/web/static/* \ $(dest_libdir)/web/static/ $(INSTALL) -pm 0644 \ lib/web/*.html \ $(dest_libdir)/web/ config/config.h: config/config.vars lib/bup/_helpers$(SOEXT): \ config/config.h \ lib/bup/bupsplit.c lib/bup/_helpers.c lib/bup/csetup.py @rm -f $@ cd 
lib/bup && \ LDFLAGS="$(LDFLAGS)" CFLAGS="$(CFLAGS)" "$(bup_python)" csetup.py build cp lib/bup/build/*/_helpers$(SOEXT) lib/bup/ lib/bup/_checkout.py: @if grep -F '$Format' lib/bup/_release.py \ && ! test -e lib/bup/_checkout.py; then \ echo "Something has gone wrong; $@ should already exist."; \ echo 'Check "./configure-version --update"'; \ false; \ fi t/tmp: mkdir t/tmp runtests: runtests-python runtests-cmdline # The "pwd -P" here may not be appropriate in the long run, but we # need it until we settle the relevant drecurse/exclusion questions: # https://groups.google.com/forum/#!topic/bup-list/9ke-Mbp10Q0 runtests-python: all t/tmp $(pf); cd $$(pwd -P); TMPDIR="$(test_tmp)" \ "$(bup_python)" wvtest.py t/t*.py lib/*/t/t*.py 2>&1 \ | tee -a t/tmp/test-log/$$$$.log cmdline_tests := \ t/test-prune-older \ t/test-web.sh \ t/test-rm.sh \ t/test-gc.sh \ t/test-main.sh \ t/test-list-idx.sh \ t/test-index.sh \ t/test-split-join.sh \ t/test-fuse.sh \ t/test-drecurse.sh \ t/test-cat-file.sh \ t/test-compression.sh \ t/test-fsck.sh \ t/test-index-clear.sh \ t/test-index-check-device.sh \ t/test-ls.sh \ t/test-tz.sh \ t/test-meta.sh \ t/test-on.sh \ t/test-restore-map-owner.sh \ t/test-restore-single-file.sh \ t/test-rm-between-index-and-save.sh \ t/test-save-with-valid-parent.sh \ t/test-sparse-files.sh \ t/test-command-without-init-fails.sh \ t/test-redundant-saves.sh \ t/test-save-creates-no-unrefs.sh \ t/test-save-restore-excludes.sh \ t/test-save-strip-graft.sh \ t/test-import-duplicity.sh \ t/test-import-rdiff-backup.sh \ t/test-xdev.sh \ t/test.sh # For parallel runs. # The "pwd -P" here may not be appropriate in the long run, but we # need it until we settle the relevant drecurse/exclusion questions: # https://groups.google.com/forum/#!topic/bup-list/9ke-Mbp10Q0 tmp-target-run-test%: all t/tmp $(pf); cd $$(pwd -P); TMPDIR="$(test_tmp)" \ t/test$* 2>&1 | tee -a t/tmp/test-log/$$$$.log runtests-cmdline: $(subst t/test,tmp-target-run-test,$(cmdline_tests)) stupid: PATH=/bin:/usr/bin $(MAKE) test test: all if test -e t/tmp/test-log; then rm -r t/tmp/test-log; fi mkdir -p t/tmp/test-log ./wvtest watch --no-counts \ $(MAKE) runtests-python runtests-cmdline 2>t/tmp/test-log/$$$$.log ./wvtest report t/tmp/test-log/*.log check: test distcheck: all ./wvtest run t/test-release-archive.sh cmd/python-cmd.sh: config/config.vars Makefile printf "#!/bin/sh\nexec %q \"\$$@\"" "$(bup_python)" \ >> cmd/python-cmd.sh.$$PPID.tmp chmod +x cmd/python-cmd.sh.$$PPID.tmp mv cmd/python-cmd.sh.$$PPID.tmp cmd/python-cmd.sh cmd/bup-%: cmd/%-cmd.py rm -f $@ ln -s $*-cmd.py $@ cmd/bup-%: cmd/%-cmd.sh rm -f $@ ln -s $*-cmd.sh $@ .PHONY: Documentation/all Documentation/all: $(man_roff) $(man_html) Documentation/substvars: $(bup_deps) echo "s,%BUP_VERSION%,$$(./bup version --tag),g" > $@ echo "s,%BUP_DATE%,$$(./bup version --date),g" >> $@ Documentation/%.1: Documentation/%.md Documentation/substvars $(pf); sed -f Documentation/substvars $< \ | $(PANDOC) -s -r markdown -w man -o $@ Documentation/%.html: Documentation/%.md Documentation/substvars $(pf); sed -f Documentation/substvars $< \ | $(PANDOC) -s -r markdown -w html -o $@ .PHONY: Documentation/clean Documentation/clean: cd Documentation && rm -f *~ .*~ *.[0-9] *.html substvars # update the local 'man' and 'html' branches with pregenerated output files, for # people who don't have pandoc (and maybe to aid in google searches or something) export-docs: Documentation/all git update-ref refs/heads/man origin/man '' 2>/dev/null || true git update-ref refs/heads/html 
origin/html '' 2>/dev/null || true set -eo pipefail; \ GIT_INDEX_FILE=gitindex.tmp; export GIT_INDEX_FILE; \ rm -f $${GIT_INDEX_FILE} && \ git add -f Documentation/*.1 && \ git update-ref refs/heads/man \ $$(echo "Autogenerated man pages for $$(git describe --always)" \ | git commit-tree $$(git write-tree --prefix=Documentation) \ -p refs/heads/man) && \ rm -f $${GIT_INDEX_FILE} && \ git add -f Documentation/*.html && \ git update-ref refs/heads/html \ $$(echo "Autogenerated html pages for $$(git describe --always)" \ | git commit-tree $$(git write-tree --prefix=Documentation) \ -p refs/heads/html) # push the pregenerated doc files to origin/man and origin/html push-docs: export-docs git push origin man html # import pregenerated doc files from origin/man and origin/html, in case you # don't have pandoc but still want to be able to install the docs. import-docs: Documentation/clean $(pf); git archive origin/html | (cd Documentation && tar -xvf -) $(pf); git archive origin/man | (cd Documentation && tar -xvf -) clean: Documentation/clean cmd/bup-python cd config && rm -f *~ .*~ \ ${CONFIGURE_DETRITUS} ${CONFIGURE_FILES} ${GENERATED_FILES} rm -f *.o lib/*/*.o *.so lib/*/*.so *.dll lib/*/*.dll *.exe \ .*~ *~ */*~ lib/*/*~ lib/*/*/*~ \ *.pyc */*.pyc lib/*/*.pyc lib/*/*/*.pyc \ bup bup-* \ randomgen memtest \ testfs.img lib/bup/t/testfs.img if test -e t/mnt; then t/cleanup-mounts-under t/mnt; fi if test -e t/mnt; then rm -r t/mnt; fi if test -e t/tmp; then t/cleanup-mounts-under t/tmp; fi # FIXME: migrate these to t/mnt/ if test -e lib/bup/t/testfs; \ then umount lib/bup/t/testfs || true; fi rm -rf *.tmp *.tmp.meta t/*.tmp lib/*/*/*.tmp build lib/bup/build lib/bup/t/testfs if test -e t/tmp; then t/force-delete t/tmp; fi ./configure-version --clean t/configure-sampledata --clean # Remove last so that cleanup tools can depend on it rm -f cmd/bup-* cmd/python-cmd.sh bup-0.29/README000077700000000000000000000000001303127641400143652README.mdustar00rootroot00000000000000bup-0.29/README.md000066400000000000000000000502201303127641400135110ustar00rootroot00000000000000bup: It backs things up ======================= bup is a program that backs things up. It's short for "backup." Can you believe that nobody else has named an open source program "bup" after all this time? Me neither. Despite its unassuming name, bup is pretty cool. To give you an idea of just how cool it is, I wrote you this poem: Bup is teh awesome What rhymes with awesome? I guess maybe possum But that's irrelevant. Hmm. Did that help? Maybe prose is more useful after all. Reasons bup is awesome ---------------------- bup has a few advantages over other backup software: - It uses a rolling checksum algorithm (similar to rsync) to split large files into chunks. The most useful result of this is you can backup huge virtual machine (VM) disk images, databases, and XML files incrementally, even though they're typically all in one huge file, and not use tons of disk space for multiple versions. - It uses the packfile format from git (the open source version control system), so you can access the stored data even if you don't like bup's user interface. - Unlike git, it writes packfiles *directly* (instead of having a separate garbage collection / repacking stage) so it's fast even with gratuitously huge amounts of data. bup's improved index formats also allow you to track far more filenames than git (millions) and keep track of far more objects (hundreds or thousands of gigabytes). 
- Data is "automagically" shared between incremental backups without having to know which backup is based on which other one - even if the backups are made from two different computers that don't even know about each other. You just tell bup to back stuff up, and it saves only the minimum amount of data needed. - You can back up directly to a remote bup server, without needing tons of temporary disk space on the computer being backed up. And if your backup is interrupted halfway through, the next run will pick up where you left off. And it's easy to set up a bup server: just install bup on any machine where you have ssh access. - Bup can use "par2" redundancy to recover corrupted backups even if your disk has undetected bad sectors. - Even when a backup is incremental, you don't have to worry about restoring the full backup, then each of the incrementals in turn; an incremental backup *acts* as if it's a full backup, it just takes less disk space. - You can mount your bup repository as a FUSE filesystem and access the content that way, and even export it over Samba. - It's written in python (with some C parts to make it faster) so it's easy for you to extend and maintain. Reasons you might want to avoid bup ----------------------------------- - This is a very early version. Therefore it will most probably not work for you, but we don't know why. It is also missing some probably-critical features. - It requires python >= 2.6, a C compiler, and an installed git version >= 1.5.3.1. It also requires par2 if you want fsck to be able to generate the information needed to recover from some types of corruption. - It currently only works on Linux, FreeBSD, NetBSD, OS X >= 10.4, Solaris, or Windows (with Cygwin). Patches to support other platforms are welcome. - Any items in "Things that are stupid" below. Notable changes introduced by a release ======================================= - Changes in 0.29 as compared to 0.28.1 - Changes in 0.28.1 as compared to 0.28 - Changes in 0.28 as compared to 0.27.1 - Changes in 0.27.1 as compared to 0.27 Getting started =============== From source ----------- - Check out the bup source code using git: git clone https://github.com/bup/bup - Install the required python libraries (including the development libraries). On very recent Debian/Ubuntu versions, this may be sufficient (run as root): apt-get build-dep bup Otherwise try this (substitute python2.6-dev if you have an older system): apt-get install python2.7-dev python-fuse apt-get install python-pyxattr python-pylibacl apt-get install linux-libc-dev apt-get install acl attr apt-get install python-tornado # optional On CentOS (for CentOS 6, at least), this should be sufficient (run as root): yum groupinstall "Development Tools" yum install python python-devel yum install fuse-python pyxattr pylibacl yum install perl-Time-HiRes In addition to the default CentOS repositories, you may need to add RPMForge (for fuse-python) and EPEL (for pyxattr and pylibacl). On Cygwin, install python, make, rsync, and gcc4. If you would like to use the optional bup web server on systems without a tornado package, you may want to try this: pip install tornado - Build the python module and symlinks: make - Run the tests: make test The tests should pass. If they don't pass for you, stop here and send an email to bup-list@googlegroups.com. Though if there are symbolic links along the current working directory path, the tests may fail. 
Running something like this before "make test" should sidestep the problem: cd "$(/bin/pwd)" - You can install bup via "make install", and override the default destination with DESTDIR and PREFIX. Files are normally installed to "$DESTDIR/$PREFIX" where DESTDIR is empty by default, and PREFIX is set to /usr/local. So if you wanted to install bup to /opt/bup, you might do something like this: make install DESTDIR=/opt/bup PREFIX='' - The Python executable that bup will use is chosen by ./configure, which will search for a reasonable version unless PYTHON is set in the environment, in which case, bup will use that path. You can see which Python executable was chosen by looking at the configure output, or examining cmd/python-cmd.sh, and you can change the selection by re-running ./configure. From binary packages -------------------- Binary packages of bup are known to be built for the following OSes: - Debian: http://packages.debian.org/search?searchon=names&keywords=bup - Ubuntu: http://packages.ubuntu.com/search?searchon=names&keywords=bup - pkgsrc (NetBSD, Dragonfly, and others) http://pkgsrc.se/sysutils/bup http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/sysutils/bup/ - Arch Linux: https://www.archlinux.org/packages/?sort=&q=bup - Fedora: https://apps.fedoraproject.org/packages/bup Using bup --------- - Get help for any bup command: bup help bup help init bup help index bup help save bup help restore ... - Initialize the default BUP_DIR (~/.bup): bup init - Make a local backup (-v or -vv will increase the verbosity): bup index /etc bup save -n local-etc /etc - Restore a local backup to ./dest: bup restore -C ./dest local-etc/latest/etc ls -l dest/etc - Look at how much disk space your backup took: du -s ~/.bup - Make another backup (which should be mostly identical to the last one; notice that you don't have to *specify* that this backup is incremental, it just saves space automatically): bup index /etc bup save -n local-etc /etc - Look how little extra space your second backup used (on top of the first): du -s ~/.bup - Get a list of your previous backups: bup ls local-etc - Restore your first backup again: bup restore -C ./dest-2 local-etc/2013-11-23-11195/etc - Make a backup to a remote server which must already have the 'bup' command somewhere in its PATH (see /etc/profile, etc/environment, ~/.profile, or ~/.bashrc), and be accessible via ssh. Make sure to replace SERVERNAME with the actual hostname of your server: bup init -r SERVERNAME:path/to/remote-bup-dir bup index /etc bup save -r SERVERNAME:path/to/remote-bup-dir -n local-etc /etc - Restore a backup from a remote server. (FAIL: unfortunately, unlike "bup join", "bup restore" does not yet support remote restores. See both "bup join" and "Things that are stupid" below.) - Defend your backups from death rays (OK fine, more likely from the occasional bad disk block). This writes parity information (currently via par2) for all of the existing data so that bup may be able to recover from some amount of repository corruption: bup fsck -g - Use split/join instead of index/save/restore. 
Try making a local backup using tar: tar -cvf - /etc | bup split -n local-etc -vv - Try restoring the tarball: bup join local-etc | tar -tf - - Look at how much disk space your backup took: du -s ~/.bup - Make another tar backup: tar -cvf - /etc | bup split -n local-etc -vv - Look at how little extra space your second backup used on top of the first: du -s ~/.bup - Restore the first tar backup again (the ~1 is git notation for "one older than the most recent"): bup join local-etc~1 | tar -tf - - Get a list of your previous split-based backups: GIT_DIR=~/.bup git log local-etc - Make a backup on a remote server: tar -cvf - /etc | bup split -r SERVERNAME: -n local-etc -vv - Try restoring the remote backup tarball: bup join -r SERVERNAME: local-etc | tar -tf - That's all there is to it! Notes on FreeBSD ---------------- - FreeBSD's default 'make' command doesn't like bup's Makefile. In order to compile the code, run tests and install bup, you need to install GNU Make from the port named 'gmake' and use its executable instead in the commands seen above. (i.e. 'gmake test' runs bup's test suite) - Python's development headers are automatically installed with the 'python' port so there's no need to install them separately. - To use the 'bup fuse' command, you need to install the fuse kernel module from the 'fusefs-kmod' port in the 'sysutils' section and the libraries from the port named 'py-fusefs' in the 'devel' section. - The 'par2' command can be found in the port named 'par2cmdline'. - In order to compile the documentation, you need pandoc which can be found in the port named 'hs-pandoc' in the 'textproc' section. Notes on NetBSD/pkgsrc ---------------------- - See pkgsrc/sysutils/bup, which should be the most recent stable release and includes man pages. It also has a reasonable set of dependencies (git, par2, py-fuse-bindings). - The "fuse-python" package referred to is hard to locate, and is a separate tarball for the python language binding distributed by the fuse project on sourceforge. It is available as pkgsrc/filesystems/py-fuse-bindings and on NetBSD 5, "bup fuse" works with it. - "bup fuse" presents every directory/file as inode 0. The directory traversal code ("fts") in NetBSD's libc will interpret this as a cycle and error out, so "ls -R" and "find" will not work. - There is no support for ACLs. If/when some entrprising person fixes this, adjust t/compare-trees. Notes on Cygwin --------------- - There is no support for ACLs. If/when some enterprising person fixes this, adjust t/compare-trees. - In t/test.sh, two tests have been disabled. These tests check to see that repeated saves produce identical trees and that an intervening index doesn't change the SHA1. Apparently Cygwin has some unusual behaviors with respect to access times (that probably warrant further investigation). Possibly related: http://cygwin.com/ml/cygwin/2007-06/msg00436.html Notes on OS X ------------- - There is no support for ACLs. If/when some enterprising person fixes this, adjust t/compare-trees. How it works ============ Basic storage: -------------- bup stores its data in a git-formatted repository. Unfortunately, git itself doesn't actually behave very well for bup's use case (huge numbers of files, files with huge sizes, retaining file permissions/ownership are important), so we mostly don't use git's *code* except for a few helper programs. For example, bup has its own git packfile writer written in python. 
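Since the repository is an ordinary (bare) git repository, you can also poke at it from python without going through bup at all. Here's a minimal sketch (not part of bup itself); it assumes a default ~/.bup directory and a backup branch named "local-etc" created earlier with 'bup save' or 'bup split':

    import os, subprocess

    # BUP_DIR defaults to ~/.bup; point GIT_DIR there so plain git can read it.
    env = dict(os.environ, GIT_DIR=os.path.expanduser('~/.bup'))

    # List the branches bup has created (one per "-n" backup name).
    subprocess.check_call(['git', 'branch', '-a'], env=env)

    # Walk the history of one backup set, exactly as git itself would show it.
    subprocess.check_call(['git', 'log', '--oneline', 'local-etc'], env=env)

The same information is available from the shell with "GIT_DIR=~/.bup git log local-etc", as in the split/join examples above.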
Basically, 'bup split' reads the data on stdin (or from files specified on the command line), breaks it into chunks using a rolling checksum (similar to rsync), and saves those chunks into a new git packfile. There is at least one git packfile per backup. When deciding whether to write a particular chunk into the new packfile, bup first checks all the other packfiles that exist to see if they already have that chunk. If they do, the chunk is skipped. git packs come in two parts: the pack itself (*.pack) and the index (*.idx). The index is pretty small, and contains a list of all the objects in the pack. Thus, when generating a remote backup, we don't have to have a copy of the packfiles from the remote server: the local end just downloads a copy of the server's *index* files, and compares objects against those when generating the new pack, which it sends directly to the server. The "-n" option to 'bup split' and 'bup save' is the name of the backup you want to create, but it's actually implemented as a git branch. So you can do cute things like checkout a particular branch using git, and receive a bunch of chunk files corresponding to the file you split. If you use '-b' or '-t' or '-c' instead of '-n', bup split will output a list of blobs, a tree containing that list of blobs, or a commit containing that tree, respectively, to stdout. You can use this to construct your own scripts that do something with those values. The bup index: -------------- 'bup index' walks through your filesystem and updates a file (whose name is, by default, ~/.bup/bupindex) to contain the name, attributes, and an optional git SHA1 (blob id) of each file and directory. 'bup save' basically just runs the equivalent of 'bup split' a whole bunch of times, once per file in the index, and assembles a git tree that contains all the resulting objects. Among other things, that makes 'git diff' much more useful (compared to splitting a tarball, which is essentially a big binary blob). However, since bup splits large files into smaller chunks, the resulting tree structure doesn't *exactly* correspond to what git itself would have stored. Also, the tree format used by 'bup save' will probably change in the future to support storing file ownership, more complex file permissions, and so on. If a file has previously been written by 'bup save', then its git blob/tree id is stored in the index. This lets 'bup save' avoid reading that file to produce future incremental backups, which means it can go *very* fast unless a lot of files have changed. Things that are stupid for now but which we'll fix later ======================================================== Help with any of these problems, or others, is very welcome. Join the mailing list (see below) if you'd like to help. - 'bup restore' can't pull directly from a remote server. So in one sense "save -r" is a dead-end right now. Obviously you can use "ssh SERVER bup restore -C ./dest..." to create a tree you can transfer elsewhere via rsync/tar/whatever, but that's *lame*. Until we fix it, you may be able to mount the remote BUP_DIR via sshfs and then restore "normally", though that hasn't been officially tested. - 'bup save' and 'bup restore' have immature metadata support. On the plus side, they actually do have support now, but it's new, and not remotely as well tested as tar/rsync/whatever's. However, you have to start somewhere, and as of 0.25, we think it's ready for more general use. Please let us know if you have any trouble. 
Also, if any strip or graft-style options are specified to 'bup save', then no metadata will be written for the root directory. That's obviously less than ideal. - bup is overly optimistic about mmap. Right now bup just assumes that it can mmap as large a block as it likes, and that mmap will never fail. Yeah, right... If nothing else, this has failed on 32-bit architectures (and 31-bit is even worse -- looking at you, s390). To fix this, we might just implement a FakeMmap[1] class that uses normal file IO and handles all of the mmap methods[2] that bup actually calls. Then we'd swap in one of those whenever mmap fails. This would also require implementing some of the methods needed to support "[]" array access, probably at a minimum __getitem__, __setitem__, and __setslice__ [3]. [1] http://comments.gmane.org/gmane.comp.sysutils.backup.bup/613 [2] http://docs.python.org/2/library/mmap.html [3] http://docs.python.org/2/reference/datamodel.html#emulating-container-types - 'bup index' is slower than it should be. It's still rather fast: it can iterate through all the filenames on my 600,000 file filesystem in a few seconds. But it still needs to rewrite the entire index file just to add a single filename, which is pretty nasty; it should just leave the new files in a second "extra index" file or something. - bup could use inotify for *really* efficient incremental backups. You could even have your system doing "continuous" backups: whenever a file changes, we immediately send an image of it to the server. We could give the continuous-backup process a really low CPU and I/O priority so you wouldn't even know it was running. - bup only has experimental support for pruning old backups. While you should now be able to drop old saves and branches with `bup rm`, and reclaim the space occupied by data that's no longer needed by other backups with `bup gc`, these commands are experimental, and should be handled with great care. See the man pages for more information. Unless you want to help test the new commands, one possible workaround is to just start a new BUP_DIR occasionally, i.e. bup-2013, bup-2014... - bup has never been tested on anything but Linux, FreeBSD, NetBSD, OS X, and Windows+Cygwin. There's nothing that makes it *inherently* non-portable, though, so that's mostly a matter of someone putting in some effort. (For a "native" Windows port, the most annoying thing is the absence of ssh in a default Windows installation.) - bup needs better documentation. According to a recent article about bup in Linux Weekly News (https://lwn.net/Articles/380983/), "it's a bit short on examples and a user guide would be nice." Documentation is the sort of thing that will never be great unless someone from outside contributes it (since the developers can never remember which parts are hard to understand). - bup is "relatively speedy" and has "pretty good" compression. ...according to the same LWN article. Clearly neither of those is good enough. We should have awe-inspiring speed and crazy-good compression. Must work on that. Writing more parts in C might help with the speed. - bup has no GUI. Actually, that's not stupid, but you might consider it a limitation. See the ["Related Projects"](https://bup.github.io/) list for some possible options. More Documentation ================== bup has an extensive set of man pages. Try using 'bup help' to get started, or use 'bup help SUBCOMMAND' for any bup subcommand (like split, join, index, save, etc.) to get details on that command. 
For further technical details, please see ./DESIGN. How you can help ================ bup is a work in progress and there are many ways it can still be improved. If you'd like to contribute patches, ideas, or bug reports, please join the bup mailing list. You can find the mailing list archives here: http://groups.google.com/group/bup-list and you can subscribe by sending a message to: bup-list+subscribe@googlegroups.com Please see ./HACKING for additional information, i.e. how to submit patches (hint - no pull requests), how we handle branches, etc. Have fun, Avery bup-0.29/SIGNED-OFF-BY000066400000000000000000000003761303127641400140550ustar00rootroot00000000000000 Patches to bup should have a Signed-off-by: header. If you include this header in your patches, this signifies that you are licensing your patch to be used under the same terms as the rest of bup, ie. the GNU Library General Public License, version 2. bup-0.29/buptest.py000066400000000000000000000024411303127641400142740ustar00rootroot00000000000000 from contextlib import contextmanager from os.path import basename, dirname, realpath from traceback import extract_stack import subprocess, sys, tempfile from wvtest import WVPASSEQ, wvfailure_count from bup import helpers @contextmanager def no_lingering_errors(): def fail_if_errors(): if helpers.saved_errors: bt = extract_stack() src_file, src_line, src_func, src_txt = bt[-4] msg = 'saved_errors ' + repr(helpers.saved_errors) print '! %-70s %s' % ('%s:%-4d %s' % (basename(src_file), src_line, msg), 'FAILED') sys.stdout.flush() fail_if_errors() helpers.clear_errors() yield fail_if_errors() helpers.clear_errors() # Assumes (of course) this file is at the top-level of the source tree _bup_tmp = realpath(dirname(__file__) + '/t/tmp') helpers.mkdirp(_bup_tmp) @contextmanager def test_tempdir(prefix): initial_failures = wvfailure_count() tmpdir = tempfile.mkdtemp(dir=_bup_tmp, prefix=prefix) yield tmpdir if wvfailure_count() == initial_failures: subprocess.call(['chmod', '-R', 'u+rwX', tmpdir]) subprocess.call(['rm', '-rf', tmpdir]) bup-0.29/cmd/000077500000000000000000000000001303127641400127765ustar00rootroot00000000000000bup-0.29/cmd/bloom-cmd.py000077500000000000000000000121771303127641400152340ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import glob, os, sys, tempfile from bup import options, git, bloom from bup.helpers import (add_error, debug1, handle_ctrl_c, log, progress, qprogress, saved_errors) optspec = """ bup bloom [options...] 
-- ruin ruin the specified bloom file (clearing the bitfield) f,force ignore existing bloom file and regenerate it from scratch o,output= output bloom filename (default: auto) d,dir= input directory to look for idx files (default: auto) k,hashes= number of hash functions to use (4 or 5) (default: auto) c,check= check the given .idx file against the bloom filter """ def ruin_bloom(bloomfilename): rbloomfilename = git.repo_rel(bloomfilename) if not os.path.exists(bloomfilename): log("%s\n" % bloomfilename) add_error("bloom: %s not found to ruin\n" % rbloomfilename) return b = bloom.ShaBloom(bloomfilename, readwrite=True, expected=1) b.map[16:16+2**b.bits] = '\0' * 2**b.bits def check_bloom(path, bloomfilename, idx): rbloomfilename = git.repo_rel(bloomfilename) ridx = git.repo_rel(idx) if not os.path.exists(bloomfilename): log("bloom: %s: does not exist.\n" % rbloomfilename) return b = bloom.ShaBloom(bloomfilename) if not b.valid(): add_error("bloom: %r is invalid.\n" % rbloomfilename) return base = os.path.basename(idx) if base not in b.idxnames: log("bloom: %s does not contain the idx.\n" % rbloomfilename) return if base == idx: idx = os.path.join(path, idx) log("bloom: bloom file: %s\n" % rbloomfilename) log("bloom: checking %s\n" % ridx) for objsha in git.open_idx(idx): if not b.exists(objsha): add_error("bloom: ERROR: object %s missing" % str(objsha).encode('hex')) _first = None def do_bloom(path, outfilename): global _first b = None if os.path.exists(outfilename) and not opt.force: b = bloom.ShaBloom(outfilename) if not b.valid(): debug1("bloom: Existing invalid bloom found, regenerating.\n") b = None add = [] rest = [] add_count = 0 rest_count = 0 for i,name in enumerate(glob.glob('%s/*.idx' % path)): progress('bloom: counting: %d\r' % i) ix = git.open_idx(name) ixbase = os.path.basename(name) if b and (ixbase in b.idxnames): rest.append(name) rest_count += len(ix) else: add.append(name) add_count += len(ix) total = add_count + rest_count if not add: debug1("bloom: nothing to do.\n") return if b: if len(b) != rest_count: debug1("bloom: size %d != idx total %d, regenerating\n" % (len(b), rest_count)) b = None elif (b.bits < bloom.MAX_BLOOM_BITS and b.pfalse_positive(add_count) > bloom.MAX_PFALSE_POSITIVE): debug1("bloom: regenerating: adding %d entries gives " "%.2f%% false positives.\n" % (add_count, b.pfalse_positive(add_count))) b = None else: b = bloom.ShaBloom(outfilename, readwrite=True, expected=add_count) if not b: # Need all idxs to build from scratch add += rest add_count += rest_count del rest del rest_count msg = b is None and 'creating from' or 'adding' if not _first: _first = path dirprefix = (_first != path) and git.repo_rel(path)+': ' or '' progress('bloom: %s%s %d file%s (%d object%s).\n' % (dirprefix, msg, len(add), len(add)!=1 and 's' or '', add_count, add_count!=1 and 's' or '')) tfname = None if b is None: tfname = os.path.join(path, 'bup.tmp.bloom') b = bloom.create(tfname, expected=add_count, k=opt.k) count = 0 icount = 0 for name in add: ix = git.open_idx(name) qprogress('bloom: writing %.2f%% (%d/%d objects)\r' % (icount*100.0/add_count, icount, add_count)) b.add_idx(ix) count += 1 icount += len(ix) # Currently, there's an open file object for tfname inside b. # Make sure it's closed before rename. 
b.close() if tfname: os.rename(tfname, outfilename) handle_ctrl_c() o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if extra: o.fatal('no positional parameters expected') git.check_repo_or_die() if not opt.check and opt.k and opt.k not in (4,5): o.fatal('only k values of 4 and 5 are supported') paths = opt.dir and [opt.dir] or git.all_packdirs() for path in paths: debug1('bloom: scanning %s\n' % path) outfilename = opt.output or os.path.join(path, 'bup.bloom') if opt.check: check_bloom(path, outfilename, opt.check) elif opt.ruin: ruin_bloom(outfilename) else: do_bloom(path, outfilename) if saved_errors: log('WARNING: %d errors encountered during bloom.\n' % len(saved_errors)) sys.exit(1) elif opt.check: log('All tests passed.\n') bup-0.29/cmd/cat-file-cmd.py000077500000000000000000000034471303127641400156100ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import re, stat, sys from bup import options, git, vfs from bup.helpers import chunkyreader, handle_ctrl_c, log, saved_errors optspec = """ bup cat-file [--meta|--bupm] /branch/revision/[path] -- meta print the target's metadata entry (decoded then reencoded) to stdout bupm print the target directory's .bupm file directly to stdout """ handle_ctrl_c() o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) git.check_repo_or_die() top = vfs.RefList(None) if not extra: o.fatal('must specify a target') if len(extra) > 1: o.fatal('only one target file allowed') if opt.bupm and opt.meta: o.fatal('--meta and --bupm are incompatible') target = extra[0] if not re.match(r'/*[^/]+/[^/]+', target): o.fatal("path %r doesn't include a branch and revision" % target) try: n = top.lresolve(target) except vfs.NodeError as e: o.fatal(e) if isinstance(n, vfs.FakeSymlink): # Source is actually /foo/what, i.e. a top-level commit # like /foo/latest, which is a symlink to ../.commit/SHA. # So dereference it. target = n.dereference() if opt.bupm: if not stat.S_ISDIR(n.mode): o.fatal('%r is not a directory' % target) mfile = n.metadata_file() # VFS file -- cannot close(). if mfile: meta_stream = mfile.open() sys.stdout.write(meta_stream.read()) elif opt.meta: sys.stdout.write(n.metadata().encode()) else: if stat.S_ISREG(n.mode): for b in chunkyreader(n.open()): sys.stdout.write(b) else: o.fatal('%r is not a plain file' % target) if saved_errors: log('warning: %d errors encountered\n' % len(saved_errors)) sys.exit(1) bup-0.29/cmd/daemon-cmd.py000077500000000000000000000041201303127641400153540ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, getopt, socket, subprocess, fcntl from bup import options, path from bup.helpers import * optspec = """ bup daemon [options...] -- [bup-server options...] 
-- l,listen ip address to listen on, defaults to * p,port port to listen on, defaults to 1982 """ o = options.Options(optspec, optfunc=getopt.getopt) (opt, flags, extra) = o.parse(sys.argv[1:]) host = opt.listen port = opt.port and int(opt.port) or 1982 import socket import sys socks = [] e = None for res in socket.getaddrinfo(host, port, socket.AF_UNSPEC, socket.SOCK_STREAM, 0, socket.AI_PASSIVE): af, socktype, proto, canonname, sa = res try: s = socket.socket(af, socktype, proto) except socket.error as e: continue try: if af == socket.AF_INET6: log("bup daemon: listening on [%s]:%s\n" % sa[:2]) else: log("bup daemon: listening on %s:%s\n" % sa[:2]) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind(sa) s.listen(1) fcntl.fcntl(s.fileno(), fcntl.F_SETFD, fcntl.FD_CLOEXEC) except socket.error as e: s.close() continue socks.append(s) if not socks: log('bup daemon: listen socket: %s\n' % e.args[1]) sys.exit(1) try: while True: [rl,wl,xl] = select.select(socks, [], [], 60) for l in rl: s, src = l.accept() try: log("Socket accepted connection from %s\n" % (src,)) fd1 = os.dup(s.fileno()) fd2 = os.dup(s.fileno()) s.close() sp = subprocess.Popen([path.exe(), 'mux', '--', path.exe(), 'server'] + extra, stdin=fd1, stdout=fd2) finally: os.close(fd1) os.close(fd2) finally: for l in socks: l.shutdown(socket.SHUT_RDWR) l.close() debug1("bup daemon: done") bup-0.29/cmd/damage-cmd.py000077500000000000000000000031271303127641400153350ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, os, random from bup import options from bup.helpers import log def randblock(n): l = [] for i in xrange(n): l.append(chr(random.randrange(0,256))) return ''.join(l) optspec = """ bup damage [-n count] [-s maxsize] [-S seed] -- WARNING: THIS COMMAND IS EXTREMELY DANGEROUS n,num= number of blocks to damage s,size= maximum size of each damaged block percent= maximum size of each damaged block (as a percent of entire file) equal spread damage evenly throughout the file S,seed= random number seed (for repeatable tests) """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if not extra: o.fatal('filenames expected') if opt.seed != None: random.seed(opt.seed) for name in extra: log('Damaging "%s"...\n' % name) f = open(name, 'r+b') st = os.fstat(f.fileno()) size = st.st_size if opt.percent or opt.size: ms1 = int(float(opt.percent or 0)/100.0*size) or size ms2 = opt.size or size maxsize = min(ms1, ms2) else: maxsize = 1 chunks = opt.num or 10 chunksize = size/chunks for r in range(chunks): sz = random.randrange(1, maxsize+1) if sz > size: sz = size if opt.equal: ofs = r*chunksize else: ofs = random.randrange(0, size - sz + 1) log(' %6d bytes at %d\n' % (sz, ofs)) f.seek(ofs) f.write(randblock(sz)) f.close() bup-0.29/cmd/drecurse-cmd.py000077500000000000000000000031511303127641400157300ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble from os.path import relpath import sys from bup import options, drecurse from bup.helpers import log, parse_excludes, parse_rx_excludes, saved_errors optspec = """ bup drecurse -- x,xdev,one-file-system don't cross filesystem boundaries exclude= a path to exclude from the backup (can be used more than once) exclude-from= a file that contains exclude paths (can be used more than once) exclude-rx= skip paths matching the unanchored regex (may be repeated) exclude-rx-from= skip --exclude-rx patterns in file (may be repeated) q,quiet don't actually print filenames profile run under the python profiler """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if len(extra) != 1: o.fatal("exactly one filename expected") drecurse_top = extra[0] excluded_paths = parse_excludes(flags, o.fatal) if not drecurse_top.startswith('/'): excluded_paths = [relpath(x) for x in excluded_paths] exclude_rxs = parse_rx_excludes(flags, o.fatal) it = drecurse.recursive_dirlist([drecurse_top], opt.xdev, excluded_paths=excluded_paths, exclude_rxs=exclude_rxs) if opt.profile: import cProfile def do_it(): for i in it: pass cProfile.run('do_it()') else: if opt.quiet: for i in it: pass else: for (name,st) in it: print name if saved_errors: log('WARNING: %d errors encountered.\n' % len(saved_errors)) sys.exit(1) bup-0.29/cmd/fsck-cmd.py000077500000000000000000000145261303127641400150520ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, os, glob, subprocess from bup import options, git from bup.helpers import Sha1, chunkyreader, istty2, log, progress par2_ok = 0 nullf = open('/dev/null') def debug(s): if opt.verbose > 1: log(s) def run(argv): # at least in python 2.5, using "stdout=2" or "stdout=sys.stderr" below # doesn't actually work, because subprocess closes fd #2 right before # execing for some reason. So we work around it by duplicating the fd # first. 
fd = os.dup(2) # copy stderr try: p = subprocess.Popen(argv, stdout=fd, close_fds=False) return p.wait() finally: os.close(fd) def par2_setup(): global par2_ok rv = 1 try: p = subprocess.Popen(['par2', '--help'], stdout=nullf, stderr=nullf, stdin=nullf) rv = p.wait() except OSError: log('fsck: warning: par2 not found; disabling recovery features.\n') else: par2_ok = 1 def parv(lvl): if opt.verbose >= lvl: if istty2: return [] else: return ['-q'] else: return ['-qq'] def par2_generate(base): return run(['par2', 'create', '-n1', '-c200'] + parv(2) + ['--', base, base+'.pack', base+'.idx']) def par2_verify(base): return run(['par2', 'verify'] + parv(3) + ['--', base]) def par2_repair(base): return run(['par2', 'repair'] + parv(2) + ['--', base]) def quick_verify(base): f = open(base + '.pack', 'rb') f.seek(-20, 2) wantsum = f.read(20) assert(len(wantsum) == 20) f.seek(0) sum = Sha1() for b in chunkyreader(f, os.fstat(f.fileno()).st_size - 20): sum.update(b) if sum.digest() != wantsum: raise ValueError('expected %r, got %r' % (wantsum.encode('hex'), sum.hexdigest())) def git_verify(base): if opt.quick: try: quick_verify(base) except Exception as e: log('error: %s\n' % e) return 1 return 0 else: return run(['git', 'verify-pack', '--', base]) def do_pack(base, last, par2_exists): code = 0 if par2_ok and par2_exists and (opt.repair or not opt.generate): vresult = par2_verify(base) if vresult != 0: if opt.repair: rresult = par2_repair(base) if rresult != 0: action_result = 'failed' log('%s par2 repair: failed (%d)\n' % (last, rresult)) code = rresult else: action_result = 'repaired' log('%s par2 repair: succeeded (0)\n' % last) code = 100 else: action_result = 'failed' log('%s par2 verify: failed (%d)\n' % (last, vresult)) code = vresult else: action_result = 'ok' elif not opt.generate or (par2_ok and not par2_exists): gresult = git_verify(base) if gresult != 0: action_result = 'failed' log('%s git verify: failed (%d)\n' % (last, gresult)) code = gresult else: if par2_ok and opt.generate: presult = par2_generate(base) if presult != 0: action_result = 'failed' log('%s par2 create: failed (%d)\n' % (last, presult)) code = presult else: action_result = 'generated' else: action_result = 'ok' else: assert(opt.generate and (not par2_ok or par2_exists)) action_result = 'exists' if par2_exists else 'skipped' if opt.verbose: print last, action_result return code optspec = """ bup fsck [options...] [filenames...] -- r,repair attempt to repair errors using par2 (dangerous!) g,generate generate auto-repair information using par2 v,verbose increase verbosity (can be used more than once) quick just check pack sha1sum, don't use git verify-pack j,jobs= run 'n' jobs in parallel par2-ok immediately return 0 if par2 is ok, 1 if not disable-par2 ignore par2 even if it is available """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) par2_setup() if opt.par2_ok: if par2_ok: sys.exit(0) # 'true' in sh else: sys.exit(1) if opt.disable_par2: par2_ok = 0 git.check_repo_or_die() if not extra: debug('fsck: No filenames given: checking all packs.\n') extra = glob.glob(git.repo('objects/pack/*.pack')) code = 0 count = 0 outstanding = {} for name in extra: if name.endswith('.pack'): base = name[:-5] elif name.endswith('.idx'): base = name[:-4] elif name.endswith('.par2'): base = name[:-5] elif os.path.exists(name + '.pack'): base = name else: raise Exception('%s is not a pack file!' 
% name) (dir,last) = os.path.split(base) par2_exists = os.path.exists(base + '.par2') if par2_exists and os.stat(base + '.par2').st_size == 0: par2_exists = 0 sys.stdout.flush() debug('fsck: checking %s (%s)\n' % (last, par2_ok and par2_exists and 'par2' or 'git')) if not opt.verbose: progress('fsck (%d/%d)\r' % (count, len(extra))) if not opt.jobs: nc = do_pack(base, last, par2_exists) code = code or nc count += 1 else: while len(outstanding) >= opt.jobs: (pid,nc) = os.wait() nc >>= 8 if pid in outstanding: del outstanding[pid] code = code or nc count += 1 pid = os.fork() if pid: # parent outstanding[pid] = 1 else: # child try: sys.exit(do_pack(base, last, par2_exists)) except Exception as e: log('exception: %r\n' % e) sys.exit(99) while len(outstanding): (pid,nc) = os.wait() nc >>= 8 if pid in outstanding: del outstanding[pid] code = code or nc count += 1 if not opt.verbose: progress('fsck (%d/%d)\r' % (count, len(extra))) if istty2: debug('fsck done. \n') sys.exit(code) bup-0.29/cmd/ftp-cmd.py000077500000000000000000000147471303127641400147220ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, os, stat, fnmatch from bup import options, git, shquote, vfs, ls from bup.helpers import chunkyreader, handle_ctrl_c, log handle_ctrl_c() class OptionError(Exception): pass # Check out lib/bup/ls.py for the opt spec def do_ls(cmd_args): try: ls.do_ls(cmd_args, pwd, onabort=OptionError) except OptionError as e: return def write_to_file(inf, outf): for blob in chunkyreader(inf): outf.write(blob) def inputiter(): if os.isatty(sys.stdin.fileno()): while 1: try: yield raw_input('bup> ') except EOFError: print '' # Clear the line for the terminal's next prompt break else: for line in sys.stdin: yield line def _completer_get_subs(line): (qtype, lastword) = shquote.unfinished_word(line) (dir,name) = os.path.split(lastword) #log('\ncompleter: %r %r %r\n' % (qtype, lastword, text)) try: n = pwd.resolve(dir) subs = list(filter(lambda x: x.name.startswith(name), n.subs())) except vfs.NoSuchFile as e: subs = [] return (dir, name, qtype, lastword, subs) def find_readline_lib(): """Return the name (and possibly the full path) of the readline library linked to the given readline module. """ import readline f = open(readline.__file__, "rb") try: data = f.read() finally: f.close() import re m = re.search('\0([^\0]*libreadline[^\0]*)\0', data) if m: return m.group(1) return None def init_readline_vars(): """Work around trailing space automatically inserted by readline. See http://bugs.python.org/issue5833""" try: import ctypes except ImportError: # python before 2.5 didn't have the ctypes module; but those # old systems probably also didn't have this readline bug, so # just ignore it. 
return lib_name = find_readline_lib() if lib_name is not None: lib = ctypes.cdll.LoadLibrary(lib_name) global rl_completion_suppress_append rl_completion_suppress_append = ctypes.c_int.in_dll(lib, "rl_completion_suppress_append") rl_completion_suppress_append = None _last_line = None _last_res = None def completer(text, state): global _last_line global _last_res global rl_completion_suppress_append if rl_completion_suppress_append is not None: rl_completion_suppress_append.value = 1 try: line = readline.get_line_buffer()[:readline.get_endidx()] if _last_line != line: _last_res = _completer_get_subs(line) _last_line = line (dir, name, qtype, lastword, subs) = _last_res if state < len(subs): sn = subs[state] sn1 = sn.try_resolve() # find the type of any symlink target fullname = os.path.join(dir, sn.name) if stat.S_ISDIR(sn1.mode): ret = shquote.what_to_add(qtype, lastword, fullname+'/', terminate=False) else: ret = shquote.what_to_add(qtype, lastword, fullname, terminate=True) + ' ' return text + ret except Exception as e: log('\n') try: import traceback traceback.print_tb(sys.exc_traceback) except Exception as e2: log('Error printing traceback: %s\n' % e2) log('\nError in completion: %s\n' % e) optspec = """ bup ftp [commands...] """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) git.check_repo_or_die() top = vfs.RefList(None) pwd = top rv = 0 if extra: lines = extra else: try: import readline except ImportError: log('* readline module not available: line editing disabled.\n') readline = None if readline: readline.set_completer_delims(' \t\n\r/') readline.set_completer(completer) if sys.platform.startswith('darwin'): # MacOS uses a slighly incompatible clone of libreadline readline.parse_and_bind('bind ^I rl_complete') readline.parse_and_bind('tab: complete') init_readline_vars() lines = inputiter() for line in lines: if not line.strip(): continue words = [word for (wordstart,word) in shquote.quotesplit(line)] cmd = words[0].lower() #log('execute: %r %r\n' % (cmd, parm)) try: if cmd == 'ls': do_ls(words[1:]) elif cmd == 'cd': np = pwd for parm in words[1:]: np = np.resolve(parm) if not stat.S_ISDIR(np.mode): raise vfs.NotDir('%s is not a directory' % parm) pwd = np elif cmd == 'pwd': print pwd.fullname() elif cmd == 'cat': for parm in words[1:]: write_to_file(pwd.resolve(parm).open(), sys.stdout) elif cmd == 'get': if len(words) not in [2,3]: rv = 1 raise Exception('Usage: get [localname]') rname = words[1] (dir,base) = os.path.split(rname) lname = len(words)>2 and words[2] or base inf = pwd.resolve(rname).open() log('Saving %r\n' % lname) write_to_file(inf, open(lname, 'wb')) elif cmd == 'mget': for parm in words[1:]: (dir,base) = os.path.split(parm) for n in pwd.resolve(dir).subs(): if fnmatch.fnmatch(n.name, base): try: log('Saving %r\n' % n.name) inf = n.open() outf = open(n.name, 'wb') write_to_file(inf, outf) outf.close() except Exception as e: rv = 1 log(' error: %s\n' % e) elif cmd == 'help' or cmd == '?': log('Commands: ls cd pwd cat get mget help quit\n') elif cmd == 'quit' or cmd == 'exit' or cmd == 'bye': break else: rv = 1 raise Exception('no such command %r' % cmd) except Exception as e: rv = 1 log('error: %s\n' % e) #raise sys.exit(rv) bup-0.29/cmd/fuse-cmd.py000077500000000000000000000102421303127641400150550ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, os, errno from bup import options, git, vfs, xstat from bup.helpers import buglvl, log try: import fuse except ImportError: log('error: cannot find the python "fuse" module; please install it\n') sys.exit(1) cache = {} def cache_get(top, path): parts = path.split('/') cache[('',)] = top c = None max = len(parts) if buglvl >= 1: log('cache: %r\n' % cache.keys()) for i in range(max): pre = parts[:max-i] if buglvl >= 1: log('cache trying: %r\n' % pre) c = cache.get(tuple(pre)) if c: rest = parts[max-i:] for r in rest: if buglvl >= 1: log('resolving %r from %r\n' % (r, c.fullname())) c = c.lresolve(r) key = tuple(pre + [r]) if buglvl >= 1: log('saving: %r\n' % (key,)) cache[key] = c break assert(c) return c class BupFs(fuse.Fuse): def __init__(self, top, meta=False, verbose=0): fuse.Fuse.__init__(self) self.top = top self.meta = meta self.verbose = verbose def getattr(self, path): if self.verbose > 0: log('--getattr(%r)\n' % path) try: node = cache_get(self.top, path) st = fuse.Stat(st_mode=node.mode, st_nlink=node.nlinks(), # Until/unless we store the size in m. st_size=node.size()) if self.meta: m = node.metadata() if m: st.st_mode = m.mode st.st_uid = m.uid st.st_gid = m.gid st.st_atime = max(0, xstat.fstime_floor_secs(m.atime)) st.st_mtime = max(0, xstat.fstime_floor_secs(m.mtime)) st.st_ctime = max(0, xstat.fstime_floor_secs(m.ctime)) return st except vfs.NoSuchFile: return -errno.ENOENT def readdir(self, path, offset): if self.verbose > 0: log('--readdir(%r)\n' % path) node = cache_get(self.top, path) yield fuse.Direntry('.') yield fuse.Direntry('..') for sub in node.subs(): yield fuse.Direntry(sub.name) def readlink(self, path): if self.verbose > 0: log('--readlink(%r)\n' % path) node = cache_get(self.top, path) return node.readlink() def open(self, path, flags): if self.verbose > 0: log('--open(%r)\n' % path) node = cache_get(self.top, path) accmode = os.O_RDONLY | os.O_WRONLY | os.O_RDWR if (flags & accmode) != os.O_RDONLY: return -errno.EACCES node.open() def release(self, path, flags): if self.verbose > 0: log('--release(%r)\n' % path) def read(self, path, size, offset): if self.verbose > 0: log('--read(%r)\n' % path) n = cache_get(self.top, path) o = n.open() o.seek(offset) return o.read(size) if not hasattr(fuse, '__version__'): raise RuntimeError, "your fuse module is too old for fuse.__version__" fuse.fuse_python_api = (0, 2) optspec = """ bup fuse [-d] [-f] -- f,foreground run in foreground d,debug run in the foreground and display FUSE debug information o,allow-other allow other users to access the filesystem meta report original metadata for paths when available v,verbose increase log output (can be used more than once) """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if len(extra) != 1: o.fatal("exactly one argument expected") git.check_repo_or_die() top = vfs.RefList(None) f = BupFs(top, meta=opt.meta, verbose=opt.verbose) f.fuse_args.mountpoint = extra[0] if opt.debug: f.fuse_args.add('debug') if opt.foreground: f.fuse_args.setmod('foreground') print f.multithreaded f.multithreaded = False if opt.allow_other: f.fuse_args.add('allow_other') f.main() bup-0.29/cmd/gc-cmd.py000077500000000000000000000024501303127641400145060ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys from bup import git, options from bup.gc import bup_gc from bup.helpers import die_if_errors, handle_ctrl_c, log optspec = """ bup gc [options...] -- v,verbose increase log output (can be used more than once) threshold= only rewrite a packfile if it's over this percent garbage [10] #,compress= set compression level to # (0-9, 9 is highest) [1] unsafe use the command even though it may be DANGEROUS """ # FIXME: server mode? # FIXME: make sure client handles server-side changes reasonably handle_ctrl_c() o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if not opt.unsafe: o.fatal('refusing to run dangerous, experimental command without --unsafe') if extra: o.fatal('no positional parameters expected') if opt.threshold: try: opt.threshold = int(opt.threshold) except ValueError: o.fatal('threshold must be an integer percentage value') if opt.threshold < 0 or opt.threshold > 100: o.fatal('threshold must be an integer percentage value') git.check_repo_or_die() bup_gc(threshold=opt.threshold, compression=opt.compress, verbosity=opt.verbose) die_if_errors() bup-0.29/cmd/help-cmd.py000077500000000000000000000016761303127641400150560ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, os, glob from bup import options, path optspec = """ bup help """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if len(extra) == 0: # the wrapper program provides the default usage string os.execvp(os.environ['BUP_MAIN_EXE'], ['bup']) elif len(extra) == 1: docname = (extra[0]=='bup' and 'bup' or ('bup-%s' % extra[0])) manpath = os.path.join(path.exedir(), 'Documentation/' + docname + '.[1-9]') g = glob.glob(manpath) try: if g: os.execvp('man', ['man', '-l', g[0]]) else: os.execvp('man', ['man', docname]) except OSError as e: sys.stderr.write('Unable to run man command: %s\n' % e) sys.exit(1) else: o.fatal("exactly one command name expected") bup-0.29/cmd/import-duplicity-cmd.py000077500000000000000000000054101303127641400174320ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble from calendar import timegm from pipes import quote from subprocess import check_call from time import strftime, strptime import sys import tempfile from bup import git, options, vfs from bup.helpers import handle_ctrl_c, log, readpipe, saved_errors, unlink import bup.path optspec = """ bup import-duplicity [-n] -- n,dry-run don't do anything; just print what would be done """ def logcmd(cmd): if isinstance(cmd, basestring): log(cmd + '\n') else: log(' '.join(map(quote, cmd)) + '\n') def exc(cmd, shell=False): global opt logcmd(cmd) if not opt.dry_run: check_call(cmd, shell=shell) def exo(cmd, shell=False): global opt logcmd(cmd) if not opt.dry_run: return readpipe(cmd, shell=shell) handle_ctrl_c() log('\nbup: import-duplicity is EXPERIMENTAL (proceed with caution)\n\n') o = options.Options(optspec) opt, flags, extra = o.parse(sys.argv[1:]) if len(extra) < 1 or not extra[0]: o.fatal('duplicity source URL required') if len(extra) < 2 or not extra[1]: o.fatal('bup destination save name required') if len(extra) > 2: o.fatal('too many arguments') source_url, save_name = extra bup = bup.path.exe() git.check_repo_or_die() top = vfs.RefList(None) tmpdir = tempfile.mkdtemp(prefix='bup-import-dup-') try: dup = ['duplicity', '--archive-dir', tmpdir + '/dup-cache'] restoredir = tmpdir + '/restore' tmpidx = tmpdir + '/index' collection_status = \ exo(' '.join(map(quote, dup)) + ' collection-status --log-fd=3 %s 3>&1 1>&2' % quote(source_url), shell=True) # Duplicity output lines of interest look like this (one leading space): # full 20150222T073111Z 1 noenc # inc 20150222T073233Z 1 noenc dup_timestamps = [] for line in collection_status.splitlines(): if line.startswith(' inc '): assert(len(line) >= len(' inc 20150222T073233Z')) dup_timestamps.append(line[5:21]) elif line.startswith(' full '): assert(len(line) >= len(' full 20150222T073233Z')) dup_timestamps.append(line[6:22]) for i, dup_ts in enumerate(dup_timestamps): tm = strptime(dup_ts, '%Y%m%dT%H%M%SZ') exc(['rm', '-rf', restoredir]) exc(dup + ['restore', '-t', dup_ts, source_url, restoredir]) exc([bup, 'index', '-uxf', tmpidx, restoredir]) exc([bup, 'save', '--strip', '--date', str(timegm(tm)), '-f', tmpidx, '-n', save_name, restoredir]) finally: exc(['rm', '-rf', tmpdir]) if saved_errors: log('warning: %d errors encountered\n' % len(saved_errors)) sys.exit(1) bup-0.29/cmd/import-rdiff-backup-cmd.sh000077500000000000000000000033101303127641400177400ustar00rootroot00000000000000#!/usr/bin/env bash set -o pipefail must() { local file=${BASH_SOURCE[0]} local line=${BASH_LINENO[0]} "$@" local rc=$? if test $rc -ne 0; then echo "Failed at line $line in $file" 1>&2 exit $rc fi } usage() { echo "Usage: bup import-rdiff-backup [-n]" \ " " echo "-n,--dry-run: just print what would be done" exit 1 } control_c() { echo "bup import-rdiff-backup: signal 2 received" 1>&2 exit 128 } must trap control_c INT dry_run= while [ "$1" = "-n" -o "$1" = "--dry-run" ]; do dry_run=echo shift done bup() { $dry_run "${BUP_MAIN_EXE:=bup}" "$@" } snapshot_root="$1" branch="$2" [ -n "$snapshot_root" -a "$#" = 2 ] || usage if [ ! -e "$snapshot_root/." ]; then echo "'$snapshot_root' isn't a directory!" exit 1 fi backups=$(must rdiff-backup --list-increments --parsable-output "$snapshot_root") \ || exit $? backups_count=$(echo "$backups" | must wc -l) || exit $? counter=1 echo "$backups" | while read timestamp type; do tmpdir=$(must mktemp -d import-rdiff-backup-XXXXXXX) || exit $? 
echo "Importing backup from $(date -d @$timestamp +%c) " \ "($counter / $backups_count)" 1>&2 echo 1>&2 echo "Restoring from rdiff-backup..." 1>&2 must rdiff-backup -r $timestamp "$snapshot_root" "$tmpdir" echo 1>&2 echo "Importing into bup..." 1>&2 TMPIDX=$(must mktemp -u import-rdiff-backup-idx-XXXXXXX) || exit $? must bup index -ux -f "$tmpidx" "$tmpdir" must bup save --strip --date="$timestamp" -f "$tmpidx" -n "$branch" "$tmpdir" must rm -f "$tmpidx" must rm -rf "$tmpdir" counter=$((counter+1)) echo 1>&2 echo 1>&2 done bup-0.29/cmd/import-rsnapshot-cmd.sh000077500000000000000000000025521303127641400174330ustar00rootroot00000000000000#!/bin/sh # Does an import of a rsnapshot archive. usage() { echo "Usage: bup import-rsnapshot [-n]" \ " []" echo "-n,--dry-run: just print what would be done" exit 1 } DRY_RUN= while [ "$1" = "-n" -o "$1" = "--dry-run" ]; do DRY_RUN=echo shift done bup() { $DRY_RUN "${BUP_MAIN_EXE:=bup}" "$@" } SNAPSHOT_ROOT=$1 TARGET=$2 [ -n "$SNAPSHOT_ROOT" -a "$#" -le 2 ] || usage if [ ! -e "$SNAPSHOT_ROOT/." ]; then echo "'$SNAPSHOT_ROOT' isn't a directory!" exit 1 fi cd "$SNAPSHOT_ROOT" || exit 2 for SNAPSHOT in *; do [ -e "$SNAPSHOT/." ] || continue echo "snapshot='$SNAPSHOT'" >&2 for BRANCH_PATH in "$SNAPSHOT/"*; do BRANCH=$(basename "$BRANCH_PATH") || exit $? [ -e "$BRANCH_PATH/." ] || continue [ -z "$TARGET" -o "$TARGET" = "$BRANCH" ] || continue echo "snapshot='$SNAPSHOT' branch='$BRANCH'" >&2 # Get the snapshot's ctime DATE=$(perl -e '@a=stat($ARGV[0]) or die "$ARGV[0]: $!"; print $a[10];' "$BRANCH_PATH") [ -n "$DATE" ] || exit 3 TMPIDX=bupindex.$BRANCH.tmp bup index -ux -f "$TMPIDX" "$BRANCH_PATH/" || exit $? bup save --strip --date="$DATE" \ -f "$TMPIDX" -n "$BRANCH" \ "$BRANCH_PATH/" || exit $? rm "$TMPIDX" || exit $? done done bup-0.29/cmd/index-cmd.py000077500000000000000000000261761303127641400152370ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, stat, time, os, errno, re from bup import metadata, options, git, index, drecurse, hlinkdb from bup.drecurse import recursive_dirlist from bup.hashsplit import GIT_MODE_TREE, GIT_MODE_FILE from bup.helpers import (add_error, handle_ctrl_c, log, parse_excludes, parse_rx_excludes, progress, qprogress, saved_errors) class IterHelper: def __init__(self, l): self.i = iter(l) self.cur = None self.next() def next(self): try: self.cur = self.i.next() except StopIteration: self.cur = None return self.cur def check_index(reader): try: log('check: checking forward iteration...\n') e = None d = {} for e in reader.forward_iter(): if e.children_n: if opt.verbose: log('%08x+%-4d %r\n' % (e.children_ofs, e.children_n, e.name)) assert(e.children_ofs) assert(e.name.endswith('/')) assert(not d.get(e.children_ofs)) d[e.children_ofs] = 1 if e.flags & index.IX_HASHVALID: assert(e.sha != index.EMPTY_SHA) assert(e.gitmode) assert(not e or e.name == '/') # last entry is *always* / log('check: checking normal iteration...\n') last = None for e in reader: if last: assert(last > e.name) last = e.name except: log('index error! 
at %r\n' % e) raise log('check: passed.\n') def clear_index(indexfile): indexfiles = [indexfile, indexfile + '.meta', indexfile + '.hlink'] for indexfile in indexfiles: path = git.repo(indexfile) try: os.remove(path) if opt.verbose: log('clear: removed %s\n' % path) except OSError as e: if e.errno != errno.ENOENT: raise def update_index(top, excluded_paths, exclude_rxs, xdev_exceptions): # tmax and start must be epoch nanoseconds. tmax = (time.time() - 1) * 10**9 ri = index.Reader(indexfile) msw = index.MetaStoreWriter(indexfile + '.meta') wi = index.Writer(indexfile, msw, tmax) rig = IterHelper(ri.iter(name=top)) tstart = int(time.time()) * 10**9 hlinks = hlinkdb.HLinkDB(indexfile + '.hlink') fake_hash = None if opt.fake_valid: def fake_hash(name): return (GIT_MODE_FILE, index.FAKE_SHA) total = 0 bup_dir = os.path.abspath(git.repo()) index_start = time.time() for path, pst in recursive_dirlist([top], xdev=opt.xdev, bup_dir=bup_dir, excluded_paths=excluded_paths, exclude_rxs=exclude_rxs, xdev_exceptions=xdev_exceptions): if opt.verbose>=2 or (opt.verbose==1 and stat.S_ISDIR(pst.st_mode)): sys.stdout.write('%s\n' % path) sys.stdout.flush() elapsed = time.time() - index_start paths_per_sec = total / elapsed if elapsed else 0 qprogress('Indexing: %d (%d paths/s)\r' % (total, paths_per_sec)) elif not (total % 128): elapsed = time.time() - index_start paths_per_sec = total / elapsed if elapsed else 0 qprogress('Indexing: %d (%d paths/s)\r' % (total, paths_per_sec)) total += 1 while rig.cur and rig.cur.name > path: # deleted paths if rig.cur.exists(): rig.cur.set_deleted() rig.cur.repack() if rig.cur.nlink > 1 and not stat.S_ISDIR(rig.cur.mode): hlinks.del_path(rig.cur.name) rig.next() if rig.cur and rig.cur.name == path: # paths that already existed need_repack = False if(rig.cur.stale(pst, tstart, check_device=opt.check_device)): try: meta = metadata.from_path(path, statinfo=pst) except (OSError, IOError) as e: add_error(e) rig.next() continue if not stat.S_ISDIR(rig.cur.mode) and rig.cur.nlink > 1: hlinks.del_path(rig.cur.name) if not stat.S_ISDIR(pst.st_mode) and pst.st_nlink > 1: hlinks.add_path(path, pst.st_dev, pst.st_ino) # Clear these so they don't bloat the store -- they're # already in the index (since they vary a lot and they're # fixed length). If you've noticed "tmax", you might # wonder why it's OK to do this, since that code may # adjust (mangle) the index mtime and ctime -- producing # fake values which must not end up in a .bupm. However, # it looks like that shouldn't be possible: (1) When # "save" validates the index entry, it always reads the # metadata from the filesytem. (2) Metadata is only # read/used from the index if hashvalid is true. (3) # "faked" entries will be stale(), and so we'll invalidate # them below. meta.ctime = meta.mtime = meta.atime = 0 meta_ofs = msw.store(meta) rig.cur.update_from_stat(pst, meta_ofs) rig.cur.invalidate() need_repack = True if not (rig.cur.flags & index.IX_HASHVALID): if fake_hash: rig.cur.gitmode, rig.cur.sha = fake_hash(path) rig.cur.flags |= index.IX_HASHVALID need_repack = True if opt.fake_invalid: rig.cur.invalidate() need_repack = True if need_repack: rig.cur.repack() rig.next() else: # new paths try: meta = metadata.from_path(path, statinfo=pst) except (OSError, IOError) as e: add_error(e) continue # See same assignment to 0, above, for rationale. 
meta.atime = meta.mtime = meta.ctime = 0 meta_ofs = msw.store(meta) wi.add(path, pst, meta_ofs, hashgen=fake_hash) if not stat.S_ISDIR(pst.st_mode) and pst.st_nlink > 1: hlinks.add_path(path, pst.st_dev, pst.st_ino) elapsed = time.time() - index_start paths_per_sec = total / elapsed if elapsed else 0 progress('Indexing: %d, done (%d paths/s).\n' % (total, paths_per_sec)) hlinks.prepare_save() if ri.exists(): ri.save() wi.flush() if wi.count: wr = wi.new_reader() if opt.check: log('check: before merging: oldfile\n') check_index(ri) log('check: before merging: newfile\n') check_index(wr) mi = index.Writer(indexfile, msw, tmax) for e in index.merge(ri, wr): # FIXME: shouldn't we remove deleted entries eventually? When? mi.add_ixentry(e) ri.close() mi.close() wr.close() wi.abort() else: wi.close() msw.close() hlinks.commit_save() optspec = """ bup index <-p|-m|-s|-u|--clear|--check> [options...] -- Modes: p,print print the index entries for the given names (also works with -u) m,modified print only added/deleted/modified files (implies -p) s,status print each filename with a status char (A/M/D) (implies -p) u,update recursively update the index entries for the given file/dir names (default if no mode is specified) check carefully check index file integrity clear clear the default index Options: H,hash print the hash for each object next to its name l,long print more information about each file no-check-device don't invalidate an entry if the containing device changes fake-valid mark all index entries as up-to-date even if they aren't fake-invalid mark all index entries as invalid f,indexfile= the name of the index file (normally BUP_DIR/bupindex) exclude= a path to exclude from the backup (may be repeated) exclude-from= skip --exclude paths in file (may be repeated) exclude-rx= skip paths matching the unanchored regex (may be repeated) exclude-rx-from= skip --exclude-rx patterns in file (may be repeated) v,verbose increase log output (can be used more than once) x,xdev,one-file-system don't cross filesystem boundaries """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if not (opt.modified or \ opt['print'] or \ opt.status or \ opt.update or \ opt.check or \ opt.clear): opt.update = 1 if (opt.fake_valid or opt.fake_invalid) and not opt.update: o.fatal('--fake-{in,}valid are meaningless without -u') if opt.fake_valid and opt.fake_invalid: o.fatal('--fake-valid is incompatible with --fake-invalid') if opt.clear and opt.indexfile: o.fatal('cannot clear an external index (via -f)') # FIXME: remove this once we account for timestamp races, i.e. index; # touch new-file; index. It's possible for this to happen quickly # enough that new-file ends up with the same timestamp as the first # index, and then bup will ignore it. 
tick_start = time.time() time.sleep(1 - (tick_start - int(tick_start))) git.check_repo_or_die() indexfile = opt.indexfile or git.repo('bupindex') handle_ctrl_c() if opt.check: log('check: starting initial check.\n') check_index(index.Reader(indexfile)) if opt.clear: log('clear: clearing index.\n') clear_index(indexfile) if opt.update: if not extra: o.fatal('update mode (-u) requested but no paths given') excluded_paths = parse_excludes(flags, o.fatal) exclude_rxs = parse_rx_excludes(flags, o.fatal) xexcept = index.unique_resolved_paths(extra) for rp, path in index.reduce_paths(extra): update_index(rp, excluded_paths, exclude_rxs, xdev_exceptions=xexcept) if opt['print'] or opt.status or opt.modified: for (name, ent) in index.Reader(indexfile).filter(extra or ['']): if (opt.modified and (ent.is_valid() or ent.is_deleted() or not ent.mode)): continue line = '' if opt.status: if ent.is_deleted(): line += 'D ' elif not ent.is_valid(): if ent.sha == index.EMPTY_SHA: line += 'A ' else: line += 'M ' else: line += ' ' if opt.hash: line += ent.sha.encode('hex') + ' ' if opt.long: line += "%7s %7s " % (oct(ent.mode), oct(ent.gitmode)) print line + (name or './') if opt.check and (opt['print'] or opt.status or opt.modified or opt.update): log('check: starting final check.\n') check_index(index.Reader(indexfile)) if saved_errors: log('WARNING: %d errors encountered.\n' % len(saved_errors)) sys.exit(1) bup-0.29/cmd/init-cmd.py000077500000000000000000000012731303127641400150620ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys from bup import git, options, client from bup.helpers import log, saved_errors optspec = """ [BUP_DIR=...] bup init [-r host:path] -- r,remote= remote repository path """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if extra: o.fatal("no arguments expected") try: git.init_repo() # local repo except git.GitError as e: log("bup: error: could not init repository: %s" % e) sys.exit(1) if opt.remote: git.check_repo_or_die() cli = client.Client(opt.remote, create=True) cli.close() bup-0.29/cmd/join-cmd.py000077500000000000000000000016131303127641400150540ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys from bup import git, options, client from bup.helpers import linereader, log optspec = """ bup join [-r host:path] [refs or hashes...] -- r,remote= remote repository path o= output filename """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) git.check_repo_or_die() if not extra: extra = linereader(sys.stdin) ret = 0 if opt.remote: cli = client.Client(opt.remote) cat = cli.cat else: cp = git.CatPipe() cat = cp.join if opt.o: outfile = open(opt.o, 'wb') else: outfile = sys.stdout for id in extra: try: for blob in cat(id): outfile.write(blob) except KeyError as e: outfile.flush() log('error: %s\n' % e) ret = 1 sys.exit(ret) bup-0.29/cmd/list-idx-cmd.py000077500000000000000000000027241303127641400156560ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, os from bup import git, options from bup.helpers import add_error, handle_ctrl_c, log, qprogress, saved_errors optspec = """ bup list-idx [--find=] -- find= display only objects that start with """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) handle_ctrl_c() opt.find = opt.find or '' if not extra: o.fatal('you must provide at least one filename') if len(opt.find) > 40: o.fatal('--find parameter must be <= 40 chars long') else: if len(opt.find) % 2: s = opt.find + '0' else: s = opt.find try: bin = s.decode('hex') except TypeError: o.fatal('--find parameter is not a valid hex string') find = opt.find.lower() count = 0 for name in extra: try: ix = git.open_idx(name) except git.GitError as e: add_error('%s: %s' % (name, e)) continue if len(opt.find) == 40: if ix.exists(bin): print name, find else: # slow, exhaustive search for _i in ix: i = str(_i).encode('hex') if i.startswith(find): print name, i qprogress('Searching: %d\r' % count) count += 1 if saved_errors: log('WARNING: %d errors encountered while saving.\n' % len(saved_errors)) sys.exit(1) bup-0.29/cmd/ls-cmd.py000077500000000000000000000005471303127641400145400ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys from bup import git, vfs, ls git.check_repo_or_die() top = vfs.RefList(None) # Check out lib/bup/ls.py for the opt spec ret = ls.do_ls(sys.argv[1:], top, default='/', spec_prefix='bup ') sys.exit(ret) bup-0.29/cmd/margin-cmd.py000077500000000000000000000040351303127641400153730ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, struct, math from bup import options, git, _helpers from bup.helpers import log POPULATION_OF_EARTH=6.7e9 # as of September, 2010 optspec = """ bup margin -- predict Guess object offsets and report the maximum deviation ignore-midx Don't use midx files; use only plain pack idx files. 
""" o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if extra: o.fatal("no arguments expected") git.check_repo_or_die() git.ignore_midx = opt.ignore_midx mi = git.PackIdxList(git.repo('objects/pack')) def do_predict(ix): total = len(ix) maxdiff = 0 for count,i in enumerate(ix): prefix = struct.unpack('!Q', i[:8])[0] expected = prefix * total / (1<<64) diff = count - expected maxdiff = max(maxdiff, abs(diff)) print '%d of %d (%.3f%%) ' % (maxdiff, len(ix), maxdiff*100.0/len(ix)) sys.stdout.flush() assert(count+1 == len(ix)) if opt.predict: if opt.ignore_midx: for pack in mi.packs: do_predict(pack) else: do_predict(mi) else: # default mode: find longest matching prefix last = '\0'*20 longmatch = 0 for i in mi: if i == last: continue #assert(str(i) >= last) pm = _helpers.bitmatch(last, i) longmatch = max(longmatch, pm) last = i print longmatch log('%d matching prefix bits\n' % longmatch) doublings = math.log(len(mi), 2) bpd = longmatch / doublings log('%.2f bits per doubling\n' % bpd) remain = 160 - longmatch rdoublings = remain / bpd log('%d bits (%.2f doublings) remaining\n' % (remain, rdoublings)) larger = 2**rdoublings log('%g times larger is possible\n' % larger) perperson = larger/POPULATION_OF_EARTH log('\nEveryone on earth could have %d data sets like yours, all in one\n' 'repository, and we would expect 1 object collision.\n' % int(perperson)) bup-0.29/cmd/memtest-cmd.py000077500000000000000000000073641303127641400156040ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, re, struct, time, resource from bup import git, bloom, midx, options, _helpers from bup.helpers import handle_ctrl_c handle_ctrl_c() _linux_warned = 0 def linux_memstat(): global _linux_warned #fields = ['VmSize', 'VmRSS', 'VmData', 'VmStk', 'ms'] d = {} try: f = open('/proc/self/status') except IOError as e: if not _linux_warned: log('Warning: %s\n' % e) _linux_warned = 1 return {} for line in f: # Note that on Solaris, this file exists but is binary. If that # happens, this split() might not return two elements. We don't # really need to care about the binary format since this output # isn't used for much and report() can deal with missing entries. 
t = re.split(r':\s*', line.strip(), 1) if len(t) == 2: k,v = t d[k] = v return d last = last_u = last_s = start = 0 def report(count): global last, last_u, last_s, start headers = ['RSS', 'MajFlt', 'user', 'sys', 'ms'] ru = resource.getrusage(resource.RUSAGE_SELF) now = time.time() rss = int(ru.ru_maxrss/1024) if not rss: rss = linux_memstat().get('VmRSS', '??') fields = [rss, ru.ru_majflt, int((ru.ru_utime - last_u) * 1000), int((ru.ru_stime - last_s) * 1000), int((now - last) * 1000)] fmt = '%9s ' + ('%10s ' * len(fields)) if count >= 0: print fmt % tuple([count] + fields) else: start = now print fmt % tuple([''] + headers) sys.stdout.flush() # don't include time to run report() in usage counts ru = resource.getrusage(resource.RUSAGE_SELF) last_u = ru.ru_utime last_s = ru.ru_stime last = time.time() optspec = """ bup memtest [-n elements] [-c cycles] -- n,number= number of objects per cycle [10000] c,cycles= number of cycles to run [100] ignore-midx ignore .midx files, use only .idx files existing test with existing objects instead of fake ones """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if extra: o.fatal('no arguments expected') git.ignore_midx = opt.ignore_midx git.check_repo_or_die() m = git.PackIdxList(git.repo('objects/pack')) report(-1) _helpers.random_sha() report(0) if opt.existing: def foreverit(mi): while 1: for e in mi: yield e objit = iter(foreverit(m)) for c in xrange(opt.cycles): for n in xrange(opt.number): if opt.existing: bin = objit.next() assert(m.exists(bin)) else: bin = _helpers.random_sha() # technically, a randomly generated object id might exist. # but the likelihood of that is the likelihood of finding # a collision in sha-1 by accident, which is so unlikely that # we don't care. assert(not m.exists(bin)) report((c+1)*opt.number) if bloom._total_searches: print ('bloom: %d objects searched in %d steps: avg %.3f steps/object' % (bloom._total_searches, bloom._total_steps, bloom._total_steps*1.0/bloom._total_searches)) if midx._total_searches: print ('midx: %d objects searched in %d steps: avg %.3f steps/object' % (midx._total_searches, midx._total_steps, midx._total_steps*1.0/midx._total_searches)) if git._total_searches: print ('idx: %d objects searched in %d steps: avg %.3f steps/object' % (git._total_searches, git._total_steps, git._total_steps*1.0/git._total_searches)) print 'Total time: %.3fs' % (time.time() - start) bup-0.29/cmd/meta-cmd.py000077500000000000000000000126241303127641400150470ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble # Copyright (C) 2010 Rob Browning # # This code is covered under the terms of the GNU Library General # Public License as described in the bup LICENSE file. # TODO: Add tar-like -C option. import sys from bup import metadata from bup import options from bup.helpers import handle_ctrl_c, log, saved_errors def open_input(name): if not name or name == '-': return sys.stdin return open(name, 'r') def open_output(name): if not name or name == '-': return sys.stdout return open(name, 'w') optspec = """ bup meta --create [OPTION ...] bup meta --list [OPTION ...] bup meta --extract [OPTION ...] bup meta --start-extract [OPTION ...] bup meta --finish-extract [OPTION ...] bup meta --edit [OPTION ...] 
-- c,create write metadata for PATHs to stdout (or --file) t,list display metadata x,extract perform --start-extract followed by --finish-extract start-extract build tree matching metadata provided on standard input (or --file) finish-extract finish applying standard input (or --file) metadata to filesystem edit alter metadata; write to stdout (or --file) f,file= specify source or destination file R,recurse recurse into subdirectories xdev,one-file-system don't cross filesystem boundaries numeric-ids apply numeric IDs (user, group, etc.) rather than names symlinks handle symbolic links (default is true) paths include paths in metadata (default is true) set-uid= set metadata uid (via --edit) set-gid= set metadata gid (via --edit) set-user= set metadata user (via --edit) unset-user remove metadata user (via --edit) set-group= set metadata group (via --edit) unset-group remove metadata group (via --edit) v,verbose increase log output (can be used more than once) q,quiet don't show progress meter """ handle_ctrl_c() o = options.Options(optspec) (opt, flags, remainder) = o.parse(['--paths', '--symlinks', '--recurse'] + sys.argv[1:]) opt.verbose = opt.verbose or 0 opt.quiet = opt.quiet or 0 metadata.verbose = opt.verbose - opt.quiet action_count = sum([bool(x) for x in [opt.create, opt.list, opt.extract, opt.start_extract, opt.finish_extract, opt.edit]]) if action_count > 1: o.fatal("bup: only one action permitted: --create --list --extract --edit") if action_count == 0: o.fatal("bup: no action specified") if opt.create: if len(remainder) < 1: o.fatal("no paths specified for create") output_file = open_output(opt.file) metadata.save_tree(output_file, remainder, recurse=opt.recurse, write_paths=opt.paths, save_symlinks=opt.symlinks, xdev=opt.xdev) elif opt.list: if len(remainder) > 0: o.fatal("cannot specify paths for --list") src = open_input(opt.file) metadata.display_archive(src) elif opt.start_extract: if len(remainder) > 0: o.fatal("cannot specify paths for --start-extract") src = open_input(opt.file) metadata.start_extract(src, create_symlinks=opt.symlinks) elif opt.finish_extract: if len(remainder) > 0: o.fatal("cannot specify paths for --finish-extract") src = open_input(opt.file) metadata.finish_extract(src, restore_numeric_ids=opt.numeric_ids) elif opt.extract: if len(remainder) > 0: o.fatal("cannot specify paths for --extract") src = open_input(opt.file) metadata.extract(src, restore_numeric_ids=opt.numeric_ids, create_symlinks=opt.symlinks) elif opt.edit: if len(remainder) < 1: o.fatal("no paths specified for edit") output_file = open_output(opt.file) unset_user = False # True if --unset-user was the last relevant option. unset_group = False # True if --unset-group was the last relevant option. 
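    # For example, given "--set-user alice --unset-user", the scan below
    # leaves unset_user True and the user field is cleared; reverse the
    # two options and --set-user wins instead.  The same last-one-wins
    # rule applies to --set-group/--unset-group.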
for flag in flags: if flag[0] == '--set-user': unset_user = False elif flag[0] == '--unset-user': unset_user = True elif flag[0] == '--set-group': unset_group = False elif flag[0] == '--unset-group': unset_group = True for path in remainder: f = open(path, 'r') try: for m in metadata._ArchiveIterator(f): if opt.set_uid is not None: try: m.uid = int(opt.set_uid) except ValueError: o.fatal("uid must be an integer") if opt.set_gid is not None: try: m.gid = int(opt.set_gid) except ValueError: o.fatal("gid must be an integer") if unset_user: m.user = '' elif opt.set_user is not None: m.user = opt.set_user if unset_group: m.group = '' elif opt.set_group is not None: m.group = opt.set_group m.write(output_file) finally: f.close() if saved_errors: log('WARNING: %d errors encountered.\n' % len(saved_errors)) sys.exit(1) else: sys.exit(0) bup-0.29/cmd/midx-cmd.py000077500000000000000000000221241303127641400150560ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import glob, math, os, resource, struct, sys, tempfile from bup import options, git, midx, _helpers, xstat from bup.helpers import (Sha1, add_error, atomically_replaced_file, debug1, fdatasync, handle_ctrl_c, log, mmap_readwrite, qprogress, saved_errors, unlink) PAGE_SIZE=4096 SHA_PER_PAGE=PAGE_SIZE/20. optspec = """ bup midx [options...] -- o,output= output midx filename (default: auto-generated) a,auto automatically use all existing .midx/.idx files as input f,force merge produce exactly one .midx containing all objects p,print print names of generated midx files check validate contents of the given midx files (with -a, all midx files) max-files= maximum number of idx files to open at once [-1] d,dir= directory containing idx/midx files """ merge_into = _helpers.merge_into def _group(l, count): for i in xrange(0, len(l), count): yield l[i:i+count] def max_files(): mf = min(resource.getrlimit(resource.RLIMIT_NOFILE)) if mf > 32: mf -= 20 # just a safety margin else: mf -= 6 # minimum safety margin return mf def check_midx(name): nicename = git.repo_rel(name) log('Checking %s.\n' % nicename) try: ix = git.open_idx(name) except git.GitError as e: add_error('%s: %s' % (name, e)) return for count,subname in enumerate(ix.idxnames): sub = git.open_idx(os.path.join(os.path.dirname(name), subname)) for ecount,e in enumerate(sub): if not (ecount % 1234): qprogress(' %d/%d: %s %d/%d\r' % (count, len(ix.idxnames), git.shorten_hash(subname), ecount, len(sub))) if not sub.exists(e): add_error("%s: %s: %s missing from idx" % (nicename, git.shorten_hash(subname), str(e).encode('hex'))) if not ix.exists(e): add_error("%s: %s: %s missing from midx" % (nicename, git.shorten_hash(subname), str(e).encode('hex'))) prev = None for ecount,e in enumerate(ix): if not (ecount % 1234): qprogress(' Ordering: %d/%d\r' % (ecount, len(ix))) if not e >= prev: add_error('%s: ordering error: %s < %s' % (nicename, str(e).encode('hex'), str(prev).encode('hex'))) prev = e _first = None def _do_midx(outdir, outfilename, infilenames, prefixstr): global _first if not outfilename: assert(outdir) sum = Sha1('\0'.join(infilenames)).hexdigest() outfilename = '%s/midx-%s.midx' % (outdir, sum) inp = [] total = 0 allfilenames = [] midxs = [] try: for name in infilenames: ix = git.open_idx(name) midxs.append(ix) inp.append(( ix.map, len(ix), ix.sha_ofs, isinstance(ix, midx.PackMidx) and ix.which_ofs or 0, len(allfilenames), )) for n in ix.idxnames: 
allfilenames.append(os.path.basename(n)) total += len(ix) inp.sort(lambda x,y: cmp(str(y[0][y[2]:y[2]+20]),str(x[0][x[2]:x[2]+20]))) if not _first: _first = outdir dirprefix = (_first != outdir) and git.repo_rel(outdir)+': ' or '' debug1('midx: %s%screating from %d files (%d objects).\n' % (dirprefix, prefixstr, len(infilenames), total)) if (opt.auto and (total < 1024 and len(infilenames) < 3)) \ or ((opt.auto or opt.force) and len(infilenames) < 2) \ or (opt.force and not total): debug1('midx: nothing to do.\n') return pages = int(total/SHA_PER_PAGE) or 1 bits = int(math.ceil(math.log(pages, 2))) entries = 2**bits debug1('midx: table size: %d (%d bits)\n' % (entries*4, bits)) unlink(outfilename) with atomically_replaced_file(outfilename, 'wb') as f: f.write('MIDX') f.write(struct.pack('!II', midx.MIDX_VERSION, bits)) assert(f.tell() == 12) f.truncate(12 + 4*entries + 20*total + 4*total) f.flush() fdatasync(f.fileno()) fmap = mmap_readwrite(f, close=False) count = merge_into(fmap, bits, total, inp) del fmap # Assume this calls msync() now. f.seek(0, os.SEEK_END) f.write('\0'.join(allfilenames)) finally: for ix in midxs: if isinstance(ix, midx.PackMidx): ix.close() midxs = None inp = None # This is just for testing (if you enable this, don't clear inp above) if 0: p = midx.PackMidx(outfilename) assert(len(p.idxnames) == len(infilenames)) print p.idxnames assert(len(p) == total) for pe, e in p, git.idxmerge(inp, final_progress=False): pin = pi.next() assert(i == pin) assert(p.exists(i)) return total, outfilename def do_midx(outdir, outfilename, infilenames, prefixstr): rv = _do_midx(outdir, outfilename, infilenames, prefixstr) if rv and opt['print']: print rv[1] def do_midx_dir(path, outfilename): already = {} sizes = {} if opt.force and not opt.auto: midxs = [] # don't use existing midx files else: midxs = glob.glob('%s/*.midx' % path) contents = {} for mname in midxs: m = git.open_idx(mname) contents[mname] = [('%s/%s' % (path,i)) for i in m.idxnames] sizes[mname] = len(m) # sort the biggest+newest midxes first, so that we can eliminate # smaller (or older) redundant ones that come later in the list midxs.sort(key=lambda ix: (-sizes[ix], -xstat.stat(ix).st_mtime)) for mname in midxs: any = 0 for iname in contents[mname]: if not already.get(iname): already[iname] = 1 any = 1 if not any: debug1('%r is redundant\n' % mname) unlink(mname) already[mname] = 1 midxs = [k for k in midxs if not already.get(k)] idxs = [k for k in glob.glob('%s/*.idx' % path) if not already.get(k)] for iname in idxs: i = git.open_idx(iname) sizes[iname] = len(i) all = [(sizes[n],n) for n in (midxs + idxs)] # FIXME: what are the optimal values? Does this make sense? DESIRED_HWM = opt.force and 1 or 5 DESIRED_LWM = opt.force and 1 or 2 existed = dict((name,1) for sz,name in all) debug1('midx: %d indexes; want no more than %d.\n' % (len(all), DESIRED_HWM)) if len(all) <= DESIRED_HWM: debug1('midx: nothing to do.\n') while len(all) > DESIRED_HWM: all.sort() part1 = [name for sz,name in all[:len(all)-DESIRED_LWM+1]] part2 = all[len(all)-DESIRED_LWM+1:] all = list(do_midx_group(path, outfilename, part1)) + part2 if len(all) > DESIRED_HWM: debug1('\nStill too many indexes (%d > %d). 
Merging again.\n' % (len(all), DESIRED_HWM)) if opt['print']: for sz,name in all: if not existed.get(name): print name def do_midx_group(outdir, outfilename, infiles): groups = list(_group(infiles, opt.max_files)) gprefix = '' for n,sublist in enumerate(groups): if len(groups) != 1: gprefix = 'Group %d: ' % (n+1) rv = _do_midx(outdir, outfilename, sublist, gprefix) if rv: yield rv handle_ctrl_c() o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if extra and (opt.auto or opt.force): o.fatal("you can't use -f/-a and also provide filenames") if opt.check and (not extra and not opt.auto): o.fatal("if using --check, you must provide filenames or -a") git.check_repo_or_die() if opt.max_files < 0: opt.max_files = max_files() assert(opt.max_files >= 5) if opt.check: # check existing midx files if extra: midxes = extra else: midxes = [] paths = opt.dir and [opt.dir] or git.all_packdirs() for path in paths: debug1('midx: scanning %s\n' % path) midxes += glob.glob(os.path.join(path, '*.midx')) for name in midxes: check_midx(name) if not saved_errors: log('All tests passed.\n') else: if extra: do_midx(git.repo('objects/pack'), opt.output, extra, '') elif opt.auto or opt.force: paths = opt.dir and [opt.dir] or git.all_packdirs() for path in paths: debug1('midx: scanning %s\n' % path) do_midx_dir(path, opt.output) else: o.fatal("you must use -f or -a or provide input filenames") if saved_errors: log('WARNING: %d errors encountered.\n' % len(saved_errors)) sys.exit(1) bup-0.29/cmd/mux-cmd.py000077500000000000000000000021541303127641400147270ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import os, sys, subprocess, struct from bup import options from bup.helpers import debug1, debug2, mux # Give the subcommand exclusive access to stdin. orig_stdin = os.dup(0) devnull = os.open('/dev/null', os.O_RDONLY) os.dup2(devnull, 0) os.close(devnull) optspec = """ bup mux command [arguments...] -- """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if len(extra) < 1: o.fatal('command is required') subcmd = extra debug2('bup mux: starting %r\n' % (extra,)) outr, outw = os.pipe() errr, errw = os.pipe() def close_fds(): os.close(outr) os.close(errr) p = subprocess.Popen(subcmd, stdin=orig_stdin, stdout=outw, stderr=errw, preexec_fn=close_fds) os.close(outw) os.close(errw) sys.stdout.write('BUPMUX') sys.stdout.flush() mux(p, sys.stdout.fileno(), outr, errr) os.close(outr) os.close(errr) prv = p.wait() if prv: debug1('%s exited with code %d\n' % (extra[0], prv)) debug1('bup mux: done\n') sys.exit(prv) bup-0.29/cmd/newliner-cmd.py000077500000000000000000000023601303127641400157400ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, os, re from bup import options from bup import _helpers # fixes up sys.argv on import optspec = """ bup newliner """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if extra: o.fatal("no arguments expected") r = re.compile(r'([\r\n])') lastlen = 0 all = '' width = options._tty_width() or 78 while 1: l = r.split(all, 1) if len(l) <= 1: if len(all) >= 160: sys.stdout.write('%s\n' % all[:78]) sys.stdout.flush() all = all[78:] try: b = os.read(sys.stdin.fileno(), 4096) except KeyboardInterrupt: break if not b: break all += b else: assert(len(l) == 3) (line, splitchar, all) = l if splitchar == '\r': line = line[:width] sys.stdout.write('%-*s%s' % (lastlen, line, splitchar)) if splitchar == '\r': lastlen = len(line) else: lastlen = 0 sys.stdout.flush() if lastlen: sys.stdout.write('%-*s\r' % (lastlen, '')) if all: sys.stdout.write('%s\n' % all) bup-0.29/cmd/on--server-cmd.py000077500000000000000000000035731303127641400161210ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, os, struct from bup import options, helpers optspec = """ bup on--server -- This command is run automatically by 'bup on' """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if extra: o.fatal('no arguments expected') # get the subcommand's argv. # Normally we could just pass this on the command line, but since we'll often # be getting called on the other end of an ssh pipe, which tends to mangle # argv (by sending it via the shell), this way is much safer. buf = sys.stdin.read(4) sz = struct.unpack('!I', buf)[0] assert(sz > 0) assert(sz < 1000000) buf = sys.stdin.read(sz) assert(len(buf) == sz) argv = buf.split('\0') argv = [argv[0], 'mux', '--'] + argv # stdin/stdout are supposedly connected to 'bup server' that the caller # started for us (often on the other end of an ssh tunnel), so we don't want # to misuse them. Move them out of the way, then replace stdout with # a pointer to stderr in case our subcommand wants to do something with it. # # It might be nice to do the same with stdin, but my experiments showed that # ssh seems to make its child's stderr a readable-but-never-reads-anything # socket. They really should have used shutdown(SHUT_WR) on the other end # of it, but probably didn't. Anyway, it's too messy, so let's just make sure # anyone reading from stdin is disappointed. # # (You can't just leave stdin/stdout "not open" by closing the file # descriptors. Then the next file that opens is automatically assigned 0 or 1, # and people *trying* to read/write stdin/stdout get screwed.) os.dup2(0, 3) os.dup2(1, 4) os.dup2(2, 1) fd = os.open('/dev/null', os.O_RDONLY) os.dup2(fd, 0) os.close(fd) os.environ['BUP_SERVER_REVERSE'] = helpers.hostname() os.execvp(argv[0], argv) sys.exit(99) bup-0.29/cmd/on-cmd.py000077500000000000000000000041451303127641400145340ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, os, struct, getopt, subprocess, signal from subprocess import PIPE from bup import options, ssh, path from bup.helpers import DemuxConn, log optspec = """ bup on index ... bup on save ... bup on split ... 
""" o = options.Options(optspec, optfunc=getopt.getopt) (opt, flags, extra) = o.parse(sys.argv[1:]) if len(extra) < 2: o.fatal('arguments expected') class SigException(Exception): def __init__(self, signum): self.signum = signum Exception.__init__(self, 'signal %d received' % signum) def handler(signum, frame): raise SigException(signum) signal.signal(signal.SIGTERM, handler) signal.signal(signal.SIGINT, handler) try: sp = None p = None ret = 99 hp = extra[0].split(':') if len(hp) == 1: (hostname, port) = (hp[0], None) else: (hostname, port) = hp argv = extra[1:] p = ssh.connect(hostname, port, 'on--server', stderr=PIPE) try: argvs = '\0'.join(['bup'] + argv) p.stdin.write(struct.pack('!I', len(argvs)) + argvs) p.stdin.flush() sp = subprocess.Popen([path.exe(), 'server'], stdin=p.stdout, stdout=p.stdin) p.stdin.close() p.stdout.close() # Demultiplex remote client's stderr (back to stdout/stderr). dmc = DemuxConn(p.stderr.fileno(), open(os.devnull, "w")) for line in iter(dmc.readline, ""): sys.stdout.write(line) finally: while 1: # if we get a signal while waiting, we have to keep waiting, just # in case our child doesn't die. try: ret = p.wait() if sp: sp.wait() break except SigException as e: log('\nbup on: %s\n' % e) os.kill(p.pid, e.signum) ret = 84 except SigException as e: if ret == 0: ret = 99 log('\nbup on: %s\n' % e) sys.exit(ret) bup-0.29/cmd/prune-older-cmd.py000077500000000000000000000125271303127641400163570ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble from __future__ import print_function from collections import defaultdict from itertools import groupby from sys import stderr from time import localtime, strftime, time import re, sys from bup import git, options from bup.gc import bup_gc from bup.helpers import die_if_errors, log, partition, period_as_secs from bup.rm import bup_rm def branches(refnames=()): return ((name[11:], sha) for (name,sha) in git.list_refs(refnames=('refs/heads/' + n for n in refnames), limit_to_heads=True)) def save_name(branch, utc): return branch + '/' + strftime('%Y-%m-%d-%H%M%S', localtime(utc)) def classify_saves(saves, period_start): """For each (utc, id) in saves, yield (True, (utc, id)) if the save should be kept and (False, (utc, id)) if the save should be removed. The ids are binary hashes. """ def retain_oldest_in_region(region): prev = None for save in region: if prev: yield False, prev prev = save if prev: yield True, prev matches, rest = partition(lambda s: s[0] >= period_start['all'], saves) for save in matches: yield True, save tm_ranges = ((period_start['dailies'], lambda s: localtime(s[0]).tm_yday), (period_start['monthlies'], lambda s: localtime(s[0]).tm_mon), (period_start['yearlies'], lambda s: localtime(s[0]).tm_year)) for pstart, time_region_id in tm_ranges: matches, rest = partition(lambda s: s[0] >= pstart, rest) for region_id, region_saves in groupby(matches, time_region_id): for action in retain_oldest_in_region(region_saves): yield action for save in rest: yield False, save optspec = """ bup prune-older [options...] [BRANCH...] 
-- keep-all-for= retain all saves within the PERIOD keep-dailies-for= retain the oldest save per day within the PERIOD keep-monthlies-for= retain the oldest save per month within the PERIOD keep-yearlies-for= retain the oldest save per year within the PERIOD wrt= end all periods at this number of seconds since the epoch pretend don't prune, just report intended actions to standard output gc collect garbage after removals [1] gc-threshold= only rewrite a packfile if it's over this percent garbage [10] #,compress= set compression level to # (0-9, 9 is highest) [1] v,verbose increase log output (can be used more than once) unsafe use the command even though it may be DANGEROUS """ o = options.Options(optspec) opt, flags, roots = o.parse(sys.argv[1:]) if not opt.unsafe: o.fatal('refusing to run dangerous, experimental command without --unsafe') now = int(time()) if not opt.wrt else opt.wrt if not isinstance(now, (int, long)): o.fatal('--wrt value ' + str(now) + ' is not an integer') period_start = {} for period, extent in (('all', opt.keep_all_for), ('dailies', opt.keep_dailies_for), ('monthlies', opt.keep_monthlies_for), ('yearlies', opt.keep_yearlies_for)): if extent: secs = period_as_secs(extent) if not secs: o.fatal('%r is not a valid period' % extent) period_start[period] = now - secs if not period_start: o.fatal('at least one keep argument is required') period_start = defaultdict(lambda: float('inf'), period_start) if opt.verbose: epoch_ymd = strftime('%Y-%m-%d-%H%M%S', localtime(0)) for kind in ['all', 'dailies', 'monthlies', 'yearlies']: period_utc = period_start[kind] if period_utc != float('inf'): if not (period_utc > float('-inf')): log('keeping all ' + kind) else: try: when = strftime('%Y-%m-%d-%H%M%S', localtime(period_utc)) log('keeping ' + kind + ' since ' + when + '\n') except ValueError as ex: if period_utc < 0: log('keeping %s since %d seconds before %s\n' %(kind, abs(period_utc), epoch_ymd)) elif period_utc > 0: log('keeping %s since %d seconds after %s\n' %(kind, period_utc, epoch_ymd)) else: log('keeping %s since %s\n' % (kind, epoch_ymd)) git.check_repo_or_die() # This could be more efficient, but for now just build the whole list # in memory and let bup_rm() do some redundant work. removals = [] for branch, branch_id in branches(roots): die_if_errors() saves = git.rev_list(branch_id.encode('hex')) for keep_save, (utc, id) in classify_saves(saves, period_start): assert(keep_save in (False, True)) # FIXME: base removals on hashes if opt.pretend: print('+' if keep_save else '-', save_name(branch, utc)) elif not keep_save: removals.append(save_name(branch, utc)) if not opt.pretend: die_if_errors() bup_rm(removals, compression=opt.compress, verbosity=opt.verbose) if opt.gc: die_if_errors() bup_gc(threshold=opt.gc_threshold, compression=opt.compress, verbosity=opt.verbose) die_if_errors() bup-0.29/cmd/random-cmd.py000077500000000000000000000016631303127641400154020ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import os, sys from bup import options, _helpers from bup.helpers import atoi, handle_ctrl_c, log, parse_num optspec = """ bup random [-S seed] -- S,seed= optional random number seed [1] f,force print random data to stdout even if it's a tty v,verbose print byte counter to stderr """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if len(extra) != 1: o.fatal("exactly one argument expected") total = parse_num(extra[0]) handle_ctrl_c() if opt.force or (not os.isatty(1) and not atoi(os.environ.get('BUP_FORCE_TTY')) & 1): _helpers.write_random(sys.stdout.fileno(), total, opt.seed, opt.verbose and 1 or 0) else: log('error: not writing binary data to a terminal. Use -f to force.\n') sys.exit(1) bup-0.29/cmd/restore-cmd.py000077500000000000000000000303671303127641400156100ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import copy, errno, os, sys, stat, re from bup import options, git, metadata, vfs from bup._helpers import write_sparsely from bup.helpers import (add_error, chunkyreader, handle_ctrl_c, log, mkdirp, parse_rx_excludes, progress, qprogress, saved_errors, should_rx_exclude_path, unlink) optspec = """ bup restore [-C outdir] -- C,outdir= change to given outdir before extracting files numeric-ids restore numeric IDs (user, group, etc.) rather than names exclude-rx= skip paths matching the unanchored regex (may be repeated) exclude-rx-from= skip --exclude-rx patterns in file (may be repeated) sparse create sparse files v,verbose increase log output (can be used more than once) map-user= given OLD=NEW, restore OLD user as NEW user map-group= given OLD=NEW, restore OLD group as NEW group map-uid= given OLD=NEW, restore OLD uid as NEW uid map-gid= given OLD=NEW, restore OLD gid as NEW gid q,quiet don't show progress meter """ total_restored = 0 # stdout should be flushed after each line, even when not connected to a tty sys.stdout.flush() sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 1) def verbose1(s): if opt.verbose >= 1: print s def verbose2(s): if opt.verbose >= 2: print s def plog(s): if opt.quiet: return qprogress(s) def valid_restore_path(path): path = os.path.normpath(path) if path.startswith('/'): path = path[1:] if '/' in path: return True def print_info(n, fullname): if stat.S_ISDIR(n.mode): verbose1('%s/' % fullname) elif stat.S_ISLNK(n.mode): verbose2('%s@ -> %s' % (fullname, n.readlink())) else: verbose2(fullname) def create_path(n, fullname, meta): if meta: meta.create_path(fullname) else: # These fallbacks are important -- meta could be null if, for # example, save created a "fake" item, i.e. a new strip/graft # path element, etc. You can find cases like that by # searching for "Metadata()". 
unlink(fullname) if stat.S_ISDIR(n.mode): mkdirp(fullname) elif stat.S_ISLNK(n.mode): os.symlink(n.readlink(), fullname) def parse_owner_mappings(type, options, fatal): """Traverse the options and parse all --map-TYPEs, or call Option.fatal().""" opt_name = '--map-' + type value_rx = r'^([^=]+)=([^=]*)$' if type in ('uid', 'gid'): value_rx = r'^(-?[0-9]+)=(-?[0-9]+)$' owner_map = {} for flag in options: (option, parameter) = flag if option != opt_name: continue match = re.match(value_rx, parameter) if not match: raise fatal("couldn't parse %s as %s mapping" % (parameter, type)) old_id, new_id = match.groups() if type in ('uid', 'gid'): old_id = int(old_id) new_id = int(new_id) owner_map[old_id] = new_id return owner_map def apply_metadata(meta, name, restore_numeric_ids, owner_map): m = copy.deepcopy(meta) m.user = owner_map['user'].get(m.user, m.user) m.group = owner_map['group'].get(m.group, m.group) m.uid = owner_map['uid'].get(m.uid, m.uid) m.gid = owner_map['gid'].get(m.gid, m.gid) m.apply_to_path(name, restore_numeric_ids = restore_numeric_ids) # Track a list of (restore_path, vfs_path, meta) triples for each path # we've written for a given hardlink_target. This allows us to handle # the case where we restore a set of hardlinks out of order (with # respect to the original save call(s)) -- i.e. when we don't restore # the hardlink_target path first. This data also allows us to attempt # to handle other situations like hardlink sets that change on disk # during a save, or between index and save. targets_written = {} def hardlink_compatible(target_path, target_vfs_path, target_meta, src_node, src_meta): global top if not os.path.exists(target_path): return False target_node = top.lresolve(target_vfs_path) if src_node.mode != target_node.mode \ or src_node.mtime != target_node.mtime \ or src_node.ctime != target_node.ctime \ or src_node.hash != target_node.hash: return False if not src_meta.same_file(target_meta): return False return True def hardlink_if_possible(fullname, node, meta): """Find a suitable hardlink target, link to it, and return true, otherwise return false.""" # Expect the caller to handle restoring the metadata if # hardlinking isn't possible. global targets_written target = meta.hardlink_target target_versions = targets_written.get(target) if target_versions: # Check every path in the set that we've written so far for a match. for (target_path, target_vfs_path, target_meta) in target_versions: if hardlink_compatible(target_path, target_vfs_path, target_meta, node, meta): try: os.link(target_path, fullname) return True except OSError as e: if e.errno != errno.EXDEV: raise else: target_versions = [] targets_written[target] = target_versions full_vfs_path = node.fullname() target_versions.append((fullname, full_vfs_path, meta)) return False def write_file_content(fullname, n): outf = open(fullname, 'wb') try: for b in chunkyreader(n.open()): outf.write(b) finally: outf.close() def write_file_content_sparsely(fullname, n): outfd = os.open(fullname, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600) try: trailing_zeros = 0; for b in chunkyreader(n.open()): trailing_zeros = write_sparsely(outfd, b, 512, trailing_zeros) pos = os.lseek(outfd, trailing_zeros, os.SEEK_END) os.ftruncate(outfd, pos) finally: os.close(outfd) def find_dir_item_metadata_by_name(dir, name): """Find metadata in dir (a node) for an item with the given name, or for the directory itself if the name is ''.""" meta_stream = None try: mfile = dir.metadata_file() # VFS file -- cannot close(). 
if mfile: meta_stream = mfile.open() # First entry is for the dir itself. meta = metadata.Metadata.read(meta_stream) if name == '': return meta for sub in dir: if stat.S_ISDIR(sub.mode): meta = find_dir_item_metadata_by_name(sub, '') else: meta = metadata.Metadata.read(meta_stream) if sub.name == name: return meta finally: if meta_stream: meta_stream.close() def do_root(n, sparse, owner_map, restore_root_meta = True): # Very similar to do_node(), except that this function doesn't # create a path for n's destination directory (and so ignores # n.fullname). It assumes the destination is '.', and restores # n's metadata and content there. global total_restored, opt meta_stream = None try: # Directory metadata is the first entry in any .bupm file in # the directory. Get it. mfile = n.metadata_file() # VFS file -- cannot close(). root_meta = None if mfile: meta_stream = mfile.open() root_meta = metadata.Metadata.read(meta_stream) print_info(n, '.') total_restored += 1 plog('Restoring: %d\r' % total_restored) for sub in n: m = None # Don't get metadata if this is a dir -- handled in sub do_node(). if meta_stream and not stat.S_ISDIR(sub.mode): m = metadata.Metadata.read(meta_stream) do_node(n, sub, sparse, owner_map, meta = m) if root_meta and restore_root_meta: apply_metadata(root_meta, '.', opt.numeric_ids, owner_map) finally: if meta_stream: meta_stream.close() def do_node(top, n, sparse, owner_map, meta = None): # Create n.fullname(), relative to the current directory, and # restore all of its metadata, when available. The meta argument # will be None for dirs, or when there is no .bupm (i.e. no # metadata). global total_restored, opt meta_stream = None write_content = sparse and write_file_content_sparsely or write_file_content try: fullname = n.fullname(stop_at=top) # Match behavior of index --exclude-rx with respect to paths. exclude_candidate = '/' + fullname if(stat.S_ISDIR(n.mode)): exclude_candidate += '/' if should_rx_exclude_path(exclude_candidate, exclude_rxs): return # If this is a directory, its metadata is the first entry in # any .bupm file inside the directory. Get it. if(stat.S_ISDIR(n.mode)): mfile = n.metadata_file() # VFS file -- cannot close(). if mfile: meta_stream = mfile.open() meta = metadata.Metadata.read(meta_stream) print_info(n, fullname) created_hardlink = False if meta and meta.hardlink_target: created_hardlink = hardlink_if_possible(fullname, n, meta) if not created_hardlink: create_path(n, fullname, meta) if meta: if stat.S_ISREG(meta.mode): write_content(fullname, n) elif stat.S_ISREG(n.mode): write_content(fullname, n) total_restored += 1 plog('Restoring: %d\r' % total_restored) for sub in n: m = None # Don't get metadata if this is a dir -- handled in sub do_node(). 
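            # (A subdirectory's metadata presumably lives as the first
            # entry of its own .bupm rather than in this directory's
            # stream, so reading an entry for it here would
            # desynchronize meta_stream.)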
if meta_stream and not stat.S_ISDIR(sub.mode): m = metadata.Metadata.read(meta_stream) do_node(top, sub, sparse, owner_map, meta = m) if meta and not created_hardlink: apply_metadata(meta, fullname, opt.numeric_ids, owner_map) finally: if meta_stream: meta_stream.close() n.release() handle_ctrl_c() o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) git.check_repo_or_die() top = vfs.RefList(None) if not extra: o.fatal('must specify at least one filename to restore') exclude_rxs = parse_rx_excludes(flags, o.fatal) owner_map = {} for map_type in ('user', 'group', 'uid', 'gid'): owner_map[map_type] = parse_owner_mappings(map_type, flags, o.fatal) if opt.outdir: mkdirp(opt.outdir) os.chdir(opt.outdir) ret = 0 for d in extra: if not valid_restore_path(d): add_error("ERROR: path %r doesn't include a branch and revision" % d) continue path,name = os.path.split(d) try: n = top.lresolve(d) except vfs.NodeError as e: add_error(e) continue isdir = stat.S_ISDIR(n.mode) if not name or name == '.': # Source is /foo/what/ever/ or /foo/what/ever/. -- extract # what/ever/* to the current directory, and if name == '.' # (i.e. /foo/what/ever/.), then also restore what/ever's # metadata to the current directory. if not isdir: add_error('%r: not a directory' % d) else: do_root(n, opt.sparse, owner_map, restore_root_meta = (name == '.')) else: # Source is /foo/what/ever -- extract ./ever to cwd. if isinstance(n, vfs.FakeSymlink): # Source is actually /foo/what, i.e. a top-level commit # like /foo/latest, which is a symlink to ../.commit/SHA. # So dereference it, and restore ../.commit/SHA/. to # "./what/.". target = n.dereference() mkdirp(n.name) os.chdir(n.name) do_root(target, opt.sparse, owner_map) else: # Not a directory or fake symlink. meta = find_dir_item_metadata_by_name(n.parent, n.name) do_node(n.parent, n, opt.sparse, owner_map, meta = meta) if not opt.quiet: progress('Restoring: %d, done.\n' % total_restored) if saved_errors: log('WARNING: %d errors encountered while restoring.\n' % len(saved_errors)) sys.exit(1) bup-0.29/cmd/rm-cmd.py000077500000000000000000000015611303127641400145350ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys from bup.git import check_repo_or_die from bup.options import Options from bup.helpers import die_if_errors, handle_ctrl_c, log from bup.rm import bup_rm optspec = """ bup rm -- #,compress= set compression level to # (0-9, 9 is highest) [6] v,verbose increase verbosity (can be specified multiple times) unsafe use the command even though it may be DANGEROUS """ handle_ctrl_c() o = Options(optspec) opt, flags, extra = o.parse(sys.argv[1:]) if not opt.unsafe: o.fatal('refusing to run dangerous, experimental command without --unsafe') if len(extra) < 1: o.fatal('no paths specified') check_repo_or_die() bup_rm(extra, compression=opt.compress, verbosity=opt.verbose) die_if_errors() bup-0.29/cmd/save-cmd.py000077500000000000000000000413301303127641400150530ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble from errno import EACCES from io import BytesIO import os, sys, stat, time, math from bup import hashsplit, git, options, index, client, metadata, hlinkdb from bup.hashsplit import GIT_MODE_TREE, GIT_MODE_FILE, GIT_MODE_SYMLINK from bup.helpers import (add_error, grafted_path_components, handle_ctrl_c, hostname, istty2, log, parse_date_or_fatal, parse_num, path_components, progress, qprogress, resolve_parent, saved_errors, stripped_path_components, userfullname, username, valid_save_name) optspec = """ bup save [-tc] [-n name] -- r,remote= hostname:/path/to/repo of remote repository t,tree output a tree id c,commit output a commit id n,name= name of backup set to update (if any) d,date= date for the commit (seconds since the epoch) v,verbose increase log output (can be used more than once) q,quiet don't show progress meter smaller= only back up files smaller than n bytes bwlimit= maximum bytes/sec to transmit to server f,indexfile= the name of the index file (normally BUP_DIR/bupindex) strip strips the path to every filename given strip-path= path-prefix to be stripped when saving graft= a graft point *old_path*=*new_path* (can be used more than once) #,compress= set compression level to # (0-9, 9 is highest) [1] """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) git.check_repo_or_die() if not (opt.tree or opt.commit or opt.name): o.fatal("use one or more of -t, -c, -n") if not extra: o.fatal("no filenames given") opt.progress = (istty2 and not opt.quiet) opt.smaller = parse_num(opt.smaller or 0) if opt.bwlimit: client.bwlimit = parse_num(opt.bwlimit) if opt.date: date = parse_date_or_fatal(opt.date, o.fatal) else: date = time.time() if opt.strip and opt.strip_path: o.fatal("--strip is incompatible with --strip-path") graft_points = [] if opt.graft: if opt.strip: o.fatal("--strip is incompatible with --graft") if opt.strip_path: o.fatal("--strip-path is incompatible with --graft") for (option, parameter) in flags: if option == "--graft": splitted_parameter = parameter.split('=') if len(splitted_parameter) != 2: o.fatal("a graft point must be of the form old_path=new_path") old_path, new_path = splitted_parameter if not (old_path and new_path): o.fatal("a graft point cannot be empty") graft_points.append((resolve_parent(old_path), resolve_parent(new_path))) is_reverse = os.environ.get('BUP_SERVER_REVERSE') if is_reverse and opt.remote: o.fatal("don't use -r in reverse mode; it's automatic") if opt.name and not valid_save_name(opt.name): o.fatal("'%s' is not a valid branch name" % opt.name) refname = opt.name and 'refs/heads/%s' % opt.name or None if opt.remote or is_reverse: try: cli = client.Client(opt.remote) except client.ClientError as e: log('error: %s' % e) sys.exit(1) oldref = refname and cli.read_ref(refname) or None w = cli.new_packwriter(compression_level=opt.compress) else: cli = None oldref = refname and git.read_ref(refname) or None w = git.PackWriter(compression_level=opt.compress) handle_ctrl_c() def eatslash(dir): if dir.endswith('/'): return dir[:-1] else: return dir # Metadata is stored in a file named .bupm in each directory. The # first metadata entry will be the metadata for the current directory. # The remaining entries will be for each of the other directory # elements, in the order they're listed in the index. 
# # Since the git tree elements are sorted according to # git.shalist_item_sort_key, the metalist items are accumulated as # (sort_key, metadata) tuples, and then sorted when the .bupm file is # created. The sort_key must be computed using the element's real # name and mode rather than the git mode and (possibly mangled) name. # Maintain a stack of information representing the current location in # the archive being constructed. The current path is recorded in # parts, which will be something like ['', 'home', 'someuser'], and # the accumulated content and metadata for of the dirs in parts is # stored in parallel stacks in shalists and metalists. parts = [] # Current archive position (stack of dir names). shalists = [] # Hashes for each dir in paths. metalists = [] # Metadata for each dir in paths. def _push(part, metadata): # Enter a new archive directory -- make it the current directory. parts.append(part) shalists.append([]) metalists.append([('', metadata)]) # This dir's metadata (no name). def _pop(force_tree, dir_metadata=None): # Leave the current archive directory and add its tree to its parent. assert(len(parts) >= 1) part = parts.pop() shalist = shalists.pop() metalist = metalists.pop() if metalist and not force_tree: if dir_metadata: # Override the original metadata pushed for this dir. metalist = [('', dir_metadata)] + metalist[1:] sorted_metalist = sorted(metalist, key = lambda x : x[0]) metadata = ''.join([m[1].encode() for m in sorted_metalist]) metadata_f = BytesIO(metadata) mode, id = hashsplit.split_to_blob_or_tree(w.new_blob, w.new_tree, [metadata_f], keep_boundaries=False) shalist.append((mode, '.bupm', id)) # FIXME: only test if collision is possible (i.e. given --strip, etc.)? if force_tree: tree = force_tree else: names_seen = set() clean_list = [] for x in shalist: name = x[1] if name in names_seen: parent_path = '/'.join(parts) + '/' add_error('error: ignoring duplicate path %r in %r' % (name, parent_path)) else: names_seen.add(name) clean_list.append(x) tree = w.new_tree(clean_list) if shalists: shalists[-1].append((GIT_MODE_TREE, git.mangle_name(part, GIT_MODE_TREE, GIT_MODE_TREE), tree)) return tree lastremain = None def progress_report(n): global count, subcount, lastremain subcount += n cc = count + subcount pct = total and (cc*100.0/total) or 0 now = time.time() elapsed = now - tstart kps = elapsed and int(cc/1024./elapsed) kps_frac = 10 ** int(math.log(kps+1, 10) - 1) kps = int(kps/kps_frac)*kps_frac if cc: remain = elapsed*1.0/cc * (total-cc) else: remain = 0.0 if (lastremain and (remain > lastremain) and ((remain - lastremain)/lastremain < 0.05)): remain = lastremain else: lastremain = remain hours = int(remain/60/60) mins = int(remain/60 - hours*60) secs = int(remain - hours*60*60 - mins*60) if elapsed < 30: remainstr = '' kpsstr = '' else: kpsstr = '%dk/s' % kps if hours: remainstr = '%dh%dm' % (hours, mins) elif mins: remainstr = '%dm%d' % (mins, secs) else: remainstr = '%ds' % secs qprogress('Saving: %.2f%% (%d/%dk, %d/%d files) %s %s\r' % (pct, cc/1024, total/1024, fcount, ftotal, remainstr, kpsstr)) indexfile = opt.indexfile or git.repo('bupindex') r = index.Reader(indexfile) try: msr = index.MetaStoreReader(indexfile + '.meta') except IOError as ex: if ex.errno != EACCES: raise log('error: cannot access %r; have you run bup index?' 
% indexfile) sys.exit(1) hlink_db = hlinkdb.HLinkDB(indexfile + '.hlink') def already_saved(ent): return ent.is_valid() and w.exists(ent.sha) and ent.sha def wantrecurse_pre(ent): return not already_saved(ent) def wantrecurse_during(ent): return not already_saved(ent) or ent.sha_missing() def find_hardlink_target(hlink_db, ent): if hlink_db and not stat.S_ISDIR(ent.mode) and ent.nlink > 1: link_paths = hlink_db.node_paths(ent.dev, ent.ino) if link_paths: return link_paths[0] total = ftotal = 0 if opt.progress: for (transname,ent) in r.filter(extra, wantrecurse=wantrecurse_pre): if not (ftotal % 10024): qprogress('Reading index: %d\r' % ftotal) exists = ent.exists() hashvalid = already_saved(ent) ent.set_sha_missing(not hashvalid) if not opt.smaller or ent.size < opt.smaller: if exists and not hashvalid: total += ent.size ftotal += 1 progress('Reading index: %d, done.\n' % ftotal) hashsplit.progress_callback = progress_report # Root collisions occur when strip or graft options map more than one # path to the same directory (paths which originally had separate # parents). When that situation is detected, use empty metadata for # the parent. Otherwise, use the metadata for the common parent. # Collision example: "bup save ... --strip /foo /foo/bar /bar". # FIXME: Add collision tests, or handle collisions some other way. # FIXME: Detect/handle strip/graft name collisions (other than root), # i.e. if '/foo/bar' and '/bar' both map to '/'. first_root = None root_collision = None tstart = time.time() count = subcount = fcount = 0 lastskip_name = None lastdir = '' for (transname,ent) in r.filter(extra, wantrecurse=wantrecurse_during): (dir, file) = os.path.split(ent.name) exists = (ent.flags & index.IX_EXISTS) hashvalid = already_saved(ent) wasmissing = ent.sha_missing() oldsize = ent.size if opt.verbose: if not exists: status = 'D' elif not hashvalid: if ent.sha == index.EMPTY_SHA: status = 'A' else: status = 'M' else: status = ' ' if opt.verbose >= 2: log('%s %-70s\n' % (status, ent.name)) elif not stat.S_ISDIR(ent.mode) and lastdir != dir: if not lastdir.startswith(dir): log('%s %-70s\n' % (status, os.path.join(dir, ''))) lastdir = dir if opt.progress: progress_report(0) fcount += 1 if not exists: continue if opt.smaller and ent.size >= opt.smaller: if exists and not hashvalid: if opt.verbose: log('skipping large file "%s"\n' % ent.name) lastskip_name = ent.name continue assert(dir.startswith('/')) if opt.strip: dirp = stripped_path_components(dir, extra) elif opt.strip_path: dirp = stripped_path_components(dir, [opt.strip_path]) elif graft_points: dirp = grafted_path_components(graft_points, dir) else: dirp = path_components(dir) # At this point, dirp contains a representation of the archive # path that looks like [(archive_dir_name, real_fs_path), ...]. # So given "bup save ... --strip /foo/bar /foo/bar/baz", dirp # might look like this at some point: # [('', '/foo/bar'), ('baz', '/foo/bar/baz'), ...]. # This dual representation supports stripping/grafting, where the # archive path may not have a direct correspondence with the # filesystem. The root directory is represented by an initial # component named '', and any component that doesn't have a # corresponding filesystem directory (due to grafting, for # example) will have a real_fs_path of None, i.e. [('', None), # ...]. if first_root == None: first_root = dirp[0] elif first_root != dirp[0]: root_collision = True # If switching to a new sub-tree, finish the current sub-tree. 
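    # (The comparison below relies on ordinary Python list ordering: if
    # parts is ['', 'home', 'olduser'] and the new dirp names are just
    # ['', 'home'], the longer list compares greater, so 'olduser' is
    # popped before any new components are pushed.)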
while parts > [x[0] for x in dirp]: _pop(force_tree = None) # If switching to a new sub-tree, start a new sub-tree. for path_component in dirp[len(parts):]: dir_name, fs_path = path_component # Not indexed, so just grab the FS metadata or use empty metadata. try: meta = metadata.from_path(fs_path) if fs_path else metadata.Metadata() except (OSError, IOError) as e: add_error(e) lastskip_name = dir_name meta = metadata.Metadata() _push(dir_name, meta) if not file: if len(parts) == 1: continue # We're at the top level -- keep the current root dir # Since there's no filename, this is a subdir -- finish it. oldtree = already_saved(ent) # may be None newtree = _pop(force_tree = oldtree) if not oldtree: if lastskip_name and lastskip_name.startswith(ent.name): ent.invalidate() else: ent.validate(GIT_MODE_TREE, newtree) ent.repack() if exists and wasmissing: count += oldsize continue # it's not a directory id = None if hashvalid: id = ent.sha git_name = git.mangle_name(file, ent.mode, ent.gitmode) git_info = (ent.gitmode, git_name, id) shalists[-1].append(git_info) sort_key = git.shalist_item_sort_key((ent.mode, file, id)) meta = msr.metadata_at(ent.meta_ofs) meta.hardlink_target = find_hardlink_target(hlink_db, ent) # Restore the times that were cleared to 0 in the metastore. (meta.atime, meta.mtime, meta.ctime) = (ent.atime, ent.mtime, ent.ctime) metalists[-1].append((sort_key, meta)) else: if stat.S_ISREG(ent.mode): try: f = hashsplit.open_noatime(ent.name) except (IOError, OSError) as e: add_error(e) lastskip_name = ent.name else: try: (mode, id) = hashsplit.split_to_blob_or_tree( w.new_blob, w.new_tree, [f], keep_boundaries=False) except (IOError, OSError) as e: add_error('%s: %s' % (ent.name, e)) lastskip_name = ent.name else: if stat.S_ISDIR(ent.mode): assert(0) # handled above elif stat.S_ISLNK(ent.mode): try: rl = os.readlink(ent.name) except (OSError, IOError) as e: add_error(e) lastskip_name = ent.name else: (mode, id) = (GIT_MODE_SYMLINK, w.new_blob(rl)) else: # Everything else should be fully described by its # metadata, so just record an empty blob, so the paths # in the tree and .bupm will match up. (mode, id) = (GIT_MODE_FILE, w.new_blob("")) if id: ent.validate(mode, id) ent.repack() git_name = git.mangle_name(file, ent.mode, ent.gitmode) git_info = (mode, git_name, id) shalists[-1].append(git_info) sort_key = git.shalist_item_sort_key((ent.mode, file, id)) hlink = find_hardlink_target(hlink_db, ent) try: meta = metadata.from_path(ent.name, hardlink_target=hlink) except (OSError, IOError) as e: add_error(e) lastskip_name = ent.name else: metalists[-1].append((sort_key, meta)) if exists and wasmissing: count += oldsize subcount = 0 if opt.progress: pct = total and count*100.0/total or 100 progress('Saving: %.2f%% (%d/%dk, %d/%d files), done. \n' % (pct, count/1024, total/1024, fcount, ftotal)) while len(parts) > 1: # _pop() all the parts above the root _pop(force_tree = None) assert(len(shalists) == 1) assert(len(metalists) == 1) # Finish the root directory. tree = _pop(force_tree = None, # When there's a collision, use empty metadata for the root. 
dir_metadata = metadata.Metadata() if root_collision else None) if opt.tree: print tree.encode('hex') if opt.commit or opt.name: msg = 'bup save\n\nGenerated by command:\n%r\n' % sys.argv userline = '%s <%s@%s>' % (userfullname(), username(), hostname()) commit = w.new_commit(tree, oldref, userline, date, None, userline, date, None, msg) if opt.commit: print commit.encode('hex') msr.close() w.close() # must close before we can update the ref if opt.name: if cli: cli.update_ref(refname, commit, oldref) else: git.update_ref(refname, commit, oldref) if cli: cli.close() if saved_errors: log('WARNING: %d errors encountered while saving.\n' % len(saved_errors)) sys.exit(1) bup-0.29/cmd/server-cmd.py000077500000000000000000000136331303127641400154300ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import os, sys, struct from bup import options, git from bup.helpers import Conn, debug1, debug2, linereader, log suspended_w = None dumb_server_mode = False def do_help(conn, junk): conn.write('Commands:\n %s\n' % '\n '.join(sorted(commands))) conn.ok() def _set_mode(): global dumb_server_mode dumb_server_mode = os.path.exists(git.repo('bup-dumb-server')) debug1('bup server: serving in %s mode\n' % (dumb_server_mode and 'dumb' or 'smart')) def _init_session(reinit_with_new_repopath=None): if reinit_with_new_repopath is None and git.repodir: return git.check_repo_or_die(reinit_with_new_repopath) # OK. we now know the path is a proper repository. Record this path in the # environment so that subprocesses inherit it and know where to operate. os.environ['BUP_DIR'] = git.repodir debug1('bup server: bupdir is %r\n' % git.repodir) _set_mode() def init_dir(conn, arg): git.init_repo(arg) debug1('bup server: bupdir initialized: %r\n' % git.repodir) _init_session(arg) conn.ok() def set_dir(conn, arg): _init_session(arg) conn.ok() def list_indexes(conn, junk): _init_session() suffix = '' if dumb_server_mode: suffix = ' load' for f in os.listdir(git.repo('objects/pack')): if f.endswith('.idx'): conn.write('%s%s\n' % (f, suffix)) conn.ok() def send_index(conn, name): _init_session() assert(name.find('/') < 0) assert(name.endswith('.idx')) idx = git.open_idx(git.repo('objects/pack/%s' % name)) conn.write(struct.pack('!I', len(idx.map))) conn.write(idx.map) conn.ok() def receive_objects_v2(conn, junk): global suspended_w _init_session() suggested = set() if suspended_w: w = suspended_w suspended_w = None else: if dumb_server_mode: w = git.PackWriter(objcache_maker=None) else: w = git.PackWriter() while 1: ns = conn.read(4) if not ns: w.abort() raise Exception('object read: expected length header, got EOF\n') n = struct.unpack('!I', ns)[0] #debug2('expecting %d bytes\n' % n) if not n: debug1('bup server: received %d object%s.\n' % (w.count, w.count!=1 and "s" or '')) fullpath = w.close(run_midx=not dumb_server_mode) if fullpath: (dir, name) = os.path.split(fullpath) conn.write('%s.idx\n' % name) conn.ok() return elif n == 0xffffffff: debug2('bup server: receive-objects suspended.\n') suspended_w = w conn.ok() return shar = conn.read(20) crcr = struct.unpack('!I', conn.read(4))[0] n -= 20 + 4 buf = conn.read(n) # object sizes in bup are reasonably small #debug2('read %d bytes\n' % n) _check(w, n, len(buf), 'object read: expected %d bytes, got %d\n') if not dumb_server_mode: oldpack = w.exists(shar, want_source=True) if oldpack: assert(not oldpack == True) assert(oldpack.endswith('.idx')) (dir,name) 
= os.path.split(oldpack) if not (name in suggested): debug1("bup server: suggesting index %s\n" % git.shorten_hash(name)) debug1("bup server: because of object %s\n" % shar.encode('hex')) conn.write('index %s\n' % name) suggested.add(name) continue nw, crc = w._raw_write((buf,), sha=shar) _check(w, crcr, crc, 'object read: expected crc %d, got %d\n') # NOTREACHED def _check(w, expected, actual, msg): if expected != actual: w.abort() raise Exception(msg % (expected, actual)) def read_ref(conn, refname): _init_session() r = git.read_ref(refname) conn.write('%s\n' % (r or '').encode('hex')) conn.ok() def update_ref(conn, refname): _init_session() newval = conn.readline().strip() oldval = conn.readline().strip() git.update_ref(refname, newval.decode('hex'), oldval.decode('hex')) conn.ok() cat_pipe = None def cat(conn, id): global cat_pipe _init_session() if not cat_pipe: cat_pipe = git.CatPipe() try: for blob in cat_pipe.join(id): conn.write(struct.pack('!I', len(blob))) conn.write(blob) except KeyError as e: log('server: error: %s\n' % e) conn.write('\0\0\0\0') conn.error(e) else: conn.write('\0\0\0\0') conn.ok() optspec = """ bup server """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if extra: o.fatal('no arguments expected') debug2('bup server: reading from stdin.\n') commands = { 'quit': None, 'help': do_help, 'init-dir': init_dir, 'set-dir': set_dir, 'list-indexes': list_indexes, 'send-index': send_index, 'receive-objects-v2': receive_objects_v2, 'read-ref': read_ref, 'update-ref': update_ref, 'cat': cat, } # FIXME: this protocol is totally lame and not at all future-proof. # (Especially since we abort completely as soon as *anything* bad happens) conn = Conn(sys.stdin, sys.stdout) lr = linereader(conn) for _line in lr: line = _line.strip() if not line: continue debug1('bup server: command: %r\n' % line) words = line.split(' ', 1) cmd = words[0] rest = len(words)>1 and words[1] or '' if cmd == 'quit': break else: cmd = commands.get(cmd) if cmd: cmd(conn, rest) else: raise Exception('unknown server command: %r\n' % line) debug1('bup server: done\n') bup-0.29/cmd/split-cmd.py000077500000000000000000000165721303127641400152620ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import os, sys, time from bup import hashsplit, git, options, client from bup.helpers import (add_error, handle_ctrl_c, hostname, log, parse_num, qprogress, reprogress, saved_errors, userfullname, username, valid_save_name) optspec = """ bup split [-t] [-c] [-n name] OPTIONS [--git-ids | filenames...] bup split -b OPTIONS [--git-ids | filenames...] bup split <--noop [--copy]|--copy> OPTIONS [--git-ids | filenames...] -- Modes: b,blobs output a series of blob ids. Implies --fanout=0. 
t,tree output a tree id c,commit output a commit id n,name= save the result under the given name noop split the input, but throw away the result copy split the input, copy it to stdout, don't save to repo Options: r,remote= remote repository path d,date= date for the commit (seconds since the epoch) q,quiet don't print progress messages v,verbose increase log output (can be used more than once) git-ids read a list of git object ids from stdin and split their contents keep-boundaries don't let one chunk span two input files bench print benchmark timings to stderr max-pack-size= maximum bytes in a single pack max-pack-objects= maximum number of objects in a single pack fanout= average number of blobs in a single tree bwlimit= maximum bytes/sec to transmit to server #,compress= set compression level to # (0-9, 9 is highest) [1] """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) handle_ctrl_c() git.check_repo_or_die() if not (opt.blobs or opt.tree or opt.commit or opt.name or opt.noop or opt.copy): o.fatal("use one or more of -b, -t, -c, -n, --noop, --copy") if (opt.noop or opt.copy) and (opt.blobs or opt.tree or opt.commit or opt.name): o.fatal('--noop and --copy are incompatible with -b, -t, -c, -n') if opt.blobs and (opt.tree or opt.commit or opt.name): o.fatal('-b is incompatible with -t, -c, -n') if extra and opt.git_ids: o.fatal("don't provide filenames when using --git-ids") if opt.verbose >= 2: git.verbose = opt.verbose - 1 opt.bench = 1 if opt.max_pack_size: git.max_pack_size = parse_num(opt.max_pack_size) if opt.max_pack_objects: git.max_pack_objects = parse_num(opt.max_pack_objects) if opt.fanout: hashsplit.fanout = parse_num(opt.fanout) if opt.blobs: hashsplit.fanout = 0 if opt.bwlimit: client.bwlimit = parse_num(opt.bwlimit) if opt.date: date = parse_date_or_fatal(opt.date, o.fatal) else: date = time.time() total_bytes = 0 def prog(filenum, nbytes): global total_bytes total_bytes += nbytes if filenum > 0: qprogress('Splitting: file #%d, %d kbytes\r' % (filenum+1, total_bytes/1024)) else: qprogress('Splitting: %d kbytes\r' % (total_bytes/1024)) is_reverse = os.environ.get('BUP_SERVER_REVERSE') if is_reverse and opt.remote: o.fatal("don't use -r in reverse mode; it's automatic") start_time = time.time() if opt.name and not valid_save_name(opt.name): o.fatal("'%s' is not a valid branch name." % opt.name) refname = opt.name and 'refs/heads/%s' % opt.name or None if opt.noop or opt.copy: cli = pack_writer = oldref = None elif opt.remote or is_reverse: cli = client.Client(opt.remote) oldref = refname and cli.read_ref(refname) or None pack_writer = cli.new_packwriter(compression_level=opt.compress) else: cli = None oldref = refname and git.read_ref(refname) or None pack_writer = git.PackWriter(compression_level=opt.compress) if opt.git_ids: # the input is actually a series of git object ids that we should retrieve # and split. # # This is a bit messy, but basically it converts from a series of # CatPipe.get() iterators into a series of file-type objects. # It would be less ugly if either CatPipe.get() returned a file-like object # (not very efficient), or split_to_shalist() expected an iterator instead # of a file. 
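    # The small IterToFile adapter below is what bridges the gap: its
    # read() just returns the next chunk from the CatPipe.get()
    # iterator, and '' once the iterator is exhausted, which is all the
    # splitter needs from a "file".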
cp = git.CatPipe() class IterToFile: def __init__(self, it): self.it = iter(it) def read(self, size): v = next(self.it, None) return v or '' def read_ids(): while 1: line = sys.stdin.readline() if not line: break if line: line = line.strip() try: it = cp.get(line.strip()) next(it, None) # skip the file type except KeyError as e: add_error('error: %s' % e) continue yield IterToFile(it) files = read_ids() else: # the input either comes from a series of files or from stdin. files = extra and (open(fn) for fn in extra) or [sys.stdin] if pack_writer and opt.blobs: shalist = hashsplit.split_to_blobs(pack_writer.new_blob, files, keep_boundaries=opt.keep_boundaries, progress=prog) for (sha, size, level) in shalist: print sha.encode('hex') reprogress() elif pack_writer: # tree or commit or name if opt.name: # insert dummy_name which may be used as a restore target mode, sha = \ hashsplit.split_to_blob_or_tree(pack_writer.new_blob, pack_writer.new_tree, files, keep_boundaries=opt.keep_boundaries, progress=prog) splitfile_name = git.mangle_name('data', hashsplit.GIT_MODE_FILE, mode) shalist = [(mode, splitfile_name, sha)] else: shalist = hashsplit.split_to_shalist( pack_writer.new_blob, pack_writer.new_tree, files, keep_boundaries=opt.keep_boundaries, progress=prog) tree = pack_writer.new_tree(shalist) else: last = 0 it = hashsplit.hashsplit_iter(files, keep_boundaries=opt.keep_boundaries, progress=prog) for (blob, level) in it: hashsplit.total_split += len(blob) if opt.copy: sys.stdout.write(str(blob)) megs = hashsplit.total_split/1024/1024 if not opt.quiet and last != megs: last = megs if opt.verbose: log('\n') if opt.tree: print tree.encode('hex') if opt.commit or opt.name: msg = 'bup split\n\nGenerated by command:\n%r\n' % sys.argv ref = opt.name and ('refs/heads/%s' % opt.name) or None userline = '%s <%s@%s>' % (userfullname(), username(), hostname()) commit = pack_writer.new_commit(tree, oldref, userline, date, None, userline, date, None, msg) if opt.commit: print commit.encode('hex') if pack_writer: pack_writer.close() # must close before we can update the ref if opt.name: if cli: cli.update_ref(refname, commit, oldref) else: git.update_ref(refname, commit, oldref) if cli: cli.close() secs = time.time() - start_time size = hashsplit.total_split if opt.bench: log('bup: %.2fkbytes in %.2f secs = %.2f kbytes/sec\n' % (size/1024., secs, size/1024./secs)) if saved_errors: log('WARNING: %d errors encountered while saving.\n' % len(saved_errors)) sys.exit(1) bup-0.29/cmd/tag-cmd.py000077500000000000000000000042121303127641400146660ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import os, sys from bup import git, options from bup.helpers import debug1, handle_ctrl_c, log # FIXME: review for safe writes. handle_ctrl_c() optspec = """ bup tag bup tag [-f] bup tag [-f] -d -- d,delete= Delete a tag f,force Overwrite existing tag, or ignore missing tag when deleting """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) git.check_repo_or_die() tags = [t for sublist in git.tags().values() for t in sublist] if opt.delete: # git.delete_ref() doesn't complain if a ref doesn't exist. We # could implement this verification but we'd need to read in the # contents of the tag file and pass the hash, and we already know # about the tag's existance via "tags". 
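    #
    # Illustrative note (added comment, not in the original source): with
    # --force the existence check below is skipped entirely, so e.g.
    #
    #   bup tag -f -d old-backup        # the tag name here is just an example
    #
    # exits successfully whether or not the tag was ever created, relying on
    # git.delete_ref()'s tolerance of missing refs.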
if not opt.force and opt.delete not in tags: log("error: tag '%s' doesn't exist\n" % opt.delete) sys.exit(1) tag_file = 'refs/tags/%s' % opt.delete git.delete_ref(tag_file) sys.exit(0) if not extra: for t in tags: print t sys.exit(0) elif len(extra) < 2: o.fatal('no commit ref or hash given.') (tag_name, commit) = extra[:2] if not tag_name: o.fatal("tag name must not be empty.") debug1("args: tag name = %s; commit = %s\n" % (tag_name, commit)) if tag_name in tags and not opt.force: log("bup: error: tag '%s' already exists\n" % tag_name) sys.exit(1) if tag_name.startswith('.'): o.fatal("'%s' is not a valid tag name." % tag_name) try: hash = git.rev_parse(commit) except git.GitError as e: log("bup: error: %s" % e) sys.exit(2) if not hash: log("bup: error: commit %s not found.\n" % commit) sys.exit(2) pL = git.PackIdxList(git.repo('objects/pack')) if not pL.exists(hash): log("bup: error: commit %s not found.\n" % commit) sys.exit(2) tag_file = git.repo('refs/tags/%s' % tag_name) try: tag = file(tag_file, 'w') except OSError as e: log("bup: error: could not create tag '%s': %s" % (tag_name, e)) sys.exit(3) tag.write(hash.encode('hex')) tag.close() bup-0.29/cmd/tick-cmd.py000077500000000000000000000006101303127641400150430ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, time from bup import options optspec = """ bup tick """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if extra: o.fatal("no arguments expected") t = time.time() tleft = 1 - (t - int(t)) time.sleep(tleft) bup-0.29/cmd/version-cmd.py000077500000000000000000000031711303127641400156030ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import re, sys from bup import options from bup import version version_rx = re.compile(r'^[0-9]+\.[0-9]+(\.[0-9]+)?(-[0-9]+-g[0-9abcdef]+)?$') optspec = """ bup version [--date|--commit|--tag] -- date display the date this version of bup was created commit display the git commit id of this version of bup tag display the tag name of this version. If no tag is available, display the commit id """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) total = (opt.date or 0) + (opt.commit or 0) + (opt.tag or 0) if total > 1: o.fatal('at most one option expected') def version_date(): """Format bup's version date string for output.""" return version.DATE.split(' ')[0] def version_commit(): """Get the commit hash of bup's current version.""" return version.COMMIT def version_tag(): """Format bup's version tag (the official version number). When generated from a commit other than one pointed to with a tag, the returned string will be "unknown-" followed by the first seven positions of the commit hash. """ names = version.NAMES.strip() assert(names[0] == '(') assert(names[-1] == ')') names = names[1:-1] l = [n.strip() for n in names.split(',')] for n in l: if n.startswith('tag: ') and version_rx.match(n[5:]): return n[5:] return 'unknown-%s' % version.COMMIT[:7] if opt.date: print version_date() elif opt.commit: print version_commit() else: print version_tag() bup-0.29/cmd/web-cmd.py000077500000000000000000000207621303127641400147000ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble from collections import namedtuple import mimetypes, os, posixpath, signal, stat, sys, time, urllib, webbrowser from bup import options, git, vfs from bup.helpers import (chunkyreader, debug1, handle_ctrl_c, log, resource_path, saved_errors) try: from tornado import gen from tornado.httpserver import HTTPServer from tornado.ioloop import IOLoop from tornado.netutil import bind_unix_socket import tornado.web except ImportError: log('error: cannot find the python "tornado" module; please install it\n') sys.exit(1) handle_ctrl_c() def _compute_breadcrumbs(path, show_hidden=False): """Returns a list of breadcrumb objects for a path.""" breadcrumbs = [] breadcrumbs.append(('[root]', '/')) path_parts = path.split('/')[1:-1] full_path = '/' for part in path_parts: full_path += part + "/" url_append = "" if show_hidden: url_append = '?hidden=1' breadcrumbs.append((part, full_path+url_append)) return breadcrumbs def _contains_hidden_files(n): """Return True if n contains files starting with a '.', False otherwise.""" for sub in n: name = sub.name if len(name)>1 and name.startswith('.'): return True return False def _compute_dir_contents(n, path, show_hidden=False): """Given a vfs node, returns an iterator for display info of all subs.""" url_append = "" if show_hidden: url_append = "?hidden=1" if path != "/": yield('..', '../' + url_append, '') for sub in n: display = sub.name link = urllib.quote(sub.name) # link should be based on fully resolved type to avoid extra # HTTP redirect. if stat.S_ISDIR(sub.try_resolve().mode): link += "/" if not show_hidden and len(display)>1 and display.startswith('.'): continue size = None if stat.S_ISDIR(sub.mode): display += '/' elif stat.S_ISLNK(sub.mode): display += '@' else: size = sub.size() size = (opt.human_readable and format_filesize(size)) or size yield (display, link + url_append, size) class BupRequestHandler(tornado.web.RequestHandler): def decode_argument(self, value, name=None): if name == 'path': return value return super(BupRequestHandler, self).decode_argument(value, name) def get(self, path): return self._process_request(path) def head(self, path): return self._process_request(path) def _process_request(self, path): path = urllib.unquote(path) print 'Handling request for %s' % path try: n = top.resolve(path) except vfs.NoSuchFile: self.send_error(404) return f = None if stat.S_ISDIR(n.mode): self._list_directory(path, n) else: self._get_file(path, n) def _list_directory(self, path, n): """Helper to produce a directory listing. Return value is either a file object, or None (indicating an error). In either case, the headers are sent. """ if not path.endswith('/') and len(path) > 0: print 'Redirecting from %s to %s' % (path, path + '/') return self.redirect(path + '/', permanent=True) try: show_hidden = int(self.request.arguments.get('hidden', [0])[-1]) except ValueError as e: show_hidden = False self.render( 'list-directory.html', path=path, breadcrumbs=_compute_breadcrumbs(path, show_hidden), files_hidden=_contains_hidden_files(n), hidden_shown=show_hidden, dir_contents=_compute_dir_contents(n, path, show_hidden)) @gen.coroutine def _get_file(self, path, n): """Process a request on a file. Return value is either a file object, or None (indicating an error). In either case, the headers are sent. 
""" ctype = self._guess_type(path) self.set_header("Last-Modified", self.date_time_string(n.mtime)) self.set_header("Content-Type", ctype) size = n.size() self.set_header("Content-Length", str(size)) assert(len(n.hash) == 20) self.set_header("Etag", n.hash.encode('hex')) if self.request.method != 'HEAD': f = n.open() try: it = chunkyreader(f) for blob in chunkyreader(f): self.write(blob) finally: f.close() raise gen.Return() def _guess_type(self, path): """Guess the type of a file. Argument is a PATH (a filename). Return value is a string of the form type/subtype, usable for a MIME Content-type header. The default implementation looks the file's extension up in the table self.extensions_map, using application/octet-stream as a default; however it would be permissible (if slow) to look inside the data to make a better guess. """ base, ext = posixpath.splitext(path) if ext in self.extensions_map: return self.extensions_map[ext] ext = ext.lower() if ext in self.extensions_map: return self.extensions_map[ext] else: return self.extensions_map[''] if not mimetypes.inited: mimetypes.init() # try to read system mime.types extensions_map = mimetypes.types_map.copy() extensions_map.update({ '': 'text/plain', # Default '.py': 'text/plain', '.c': 'text/plain', '.h': 'text/plain', }) def date_time_string(self, t): return time.strftime('%a, %d %b %Y %H:%M:%S', time.gmtime(t)) io_loop = None def handle_sigterm(signum, frame): global io_loop debug1('\nbup-web: signal %d received\n' % signum) log('Shutdown requested\n') if not io_loop: sys.exit(0) io_loop.stop() signal.signal(signal.SIGTERM, handle_sigterm) UnixAddress = namedtuple('UnixAddress', ['path']) InetAddress = namedtuple('InetAddress', ['host', 'port']) optspec = """ bup web [[hostname]:port] bup web unix://path -- human-readable display human readable file sizes (i.e. 3.9K, 4.7M) browser show repository in default browser (incompatible with unix://) """ o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if len(extra) > 1: o.fatal("at most one argument expected") if len(extra) == 0: address = InetAddress(host='127.0.0.1', port=8080) else: bind_url = extra[0] if bind_url.startswith('unix://'): address = UnixAddress(path=bind_url[len('unix://'):]) else: addr_parts = extra[0].split(':', 1) if len(addr_parts) == 1: host = '127.0.0.1' port = addr_parts[0] else: host, port = addr_parts try: port = int(port) except (TypeError, ValueError) as ex: o.fatal('port must be an integer, not %r', port) address = InetAddress(host=host, port=port) git.check_repo_or_die() top = vfs.RefList(None) settings = dict( debug = 1, template_path = resource_path('web'), static_path = resource_path('web/static') ) # Disable buffering on stdout, for debug messages sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0) application = tornado.web.Application([ (r"(?P/.*)", BupRequestHandler), ], **settings) http_server = HTTPServer(application) io_loop_pending = IOLoop.instance() if isinstance(address, InetAddress): http_server.listen(address.port, address.host) try: sock = http_server._socket # tornado < 2.0 except AttributeError as e: sock = http_server._sockets.values()[0] print "Serving HTTP on %s:%d..." 
% sock.getsockname() if opt.browser: browser_addr = 'http://' + address[0] + ':' + str(address[1]) io_loop_pending.add_callback(lambda : webbrowser.open(browser_addr)) elif isinstance(address, UnixAddress): unix_socket = bind_unix_socket(address.path) http_server.add_socket(unix_socket) print "Serving HTTP on filesystem socket %r" % address.path else: log('error: unexpected address %r', address) sys.exit(1) io_loop = io_loop_pending io_loop.start() if saved_errors: log('WARNING: %d errors encountered while saving.\n' % len(saved_errors)) sys.exit(1) bup-0.29/cmd/xstat-cmd.py000077500000000000000000000070751303127641400152700ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble # Copyright (C) 2010 Rob Browning # # This code is covered under the terms of the GNU Library General # Public License as described in the bup LICENSE file. import sys, stat, errno from bup import metadata, options, xstat from bup.helpers import add_error, handle_ctrl_c, parse_timestamp, saved_errors, \ add_error, log def parse_timestamp_arg(field, value): res = str(value) # Undo autoconversion. try: res = parse_timestamp(res) except ValueError as ex: if ex.args: o.fatal('unable to parse %s resolution "%s" (%s)' % (field, value, ex)) else: o.fatal('unable to parse %s resolution "%s"' % (field, value)) if res != 1 and res % 10: o.fatal('%s resolution "%s" must be a power of 10' % (field, value)) return res optspec = """ bup xstat pathinfo [OPTION ...] -- v,verbose increase log output (can be used more than once) q,quiet don't show progress meter exclude-fields= exclude comma-separated fields include-fields= include comma-separated fields (definitive if first) atime-resolution= limit s, ms, us, ns, 10ns (value must be a power of 10) [ns] mtime-resolution= limit s, ms, us, ns, 10ns (value must be a power of 10) [ns] ctime-resolution= limit s, ms, us, ns, 10ns (value must be a power of 10) [ns] """ target_filename = '' active_fields = metadata.all_fields handle_ctrl_c() o = options.Options(optspec) (opt, flags, remainder) = o.parse(sys.argv[1:]) atime_resolution = parse_timestamp_arg('atime', opt.atime_resolution) mtime_resolution = parse_timestamp_arg('mtime', opt.mtime_resolution) ctime_resolution = parse_timestamp_arg('ctime', opt.ctime_resolution) treat_include_fields_as_definitive = True for flag, value in flags: if flag == '--exclude-fields': exclude_fields = frozenset(value.split(',')) for f in exclude_fields: if not f in metadata.all_fields: o.fatal(f + ' is not a valid field name') active_fields = active_fields - exclude_fields treat_include_fields_as_definitive = False elif flag == '--include-fields': include_fields = frozenset(value.split(',')) for f in include_fields: if not f in metadata.all_fields: o.fatal(f + ' is not a valid field name') if treat_include_fields_as_definitive: active_fields = include_fields treat_include_fields_as_definitive = False else: active_fields = active_fields | include_fields opt.verbose = opt.verbose or 0 opt.quiet = opt.quiet or 0 metadata.verbose = opt.verbose - opt.quiet first_path = True for path in remainder: try: m = metadata.from_path(path, archive_path = path) except (OSError,IOError) as e: if e.errno == errno.ENOENT: add_error(e) continue else: raise if metadata.verbose >= 0: if not first_path: print if atime_resolution != 1: m.atime = (m.atime / atime_resolution) * atime_resolution if mtime_resolution != 1: m.mtime = (m.mtime / mtime_resolution) * 
mtime_resolution if ctime_resolution != 1: m.ctime = (m.ctime / ctime_resolution) * ctime_resolution print metadata.detailed_str(m, active_fields) first_path = False if saved_errors: log('WARNING: %d errors encountered.\n' % len(saved_errors)) sys.exit(1) else: sys.exit(0) bup-0.29/config/000077500000000000000000000000001303127641400135005ustar00rootroot00000000000000bup-0.29/config/.gitignore000066400000000000000000000001121303127641400154620ustar00rootroot00000000000000config.cmd config.h config.log config.mak config.md config.sub config.varsbup-0.29/config/config.vars.in000066400000000000000000000001601303127641400162440ustar00rootroot00000000000000CONFIGURE_FILES=@CONFIGURE_FILES@ GENERATED_FILES=@GENERATED_FILES@ bup_make=@bup_make@ bup_python=@bup_python@ bup-0.29/config/configure000077500000000000000000000076551303127641400154240ustar00rootroot00000000000000#!/usr/bin/env bash bup_find_prog() { # Prints prog path to stdout or nothing. local name="$1" result="$2" TLOGN "checking for $name" if ! [ "$result" ]; then result=`acLookFor "$name"` fi TLOG " ($result)" echo "$result" } bup_try_c_code() { local code="$1" tmpdir rc if test -z "$code"; then AC_FAIL "No code provided to test compile" fi tmpdir="$(mktemp -d "bup-try-c-compile-XXXXXXX")" || exit $? echo "$code" > "$tmpdir/test.c" || exit $? $AC_CC -Wall -Werror -c -o "$tmpdir/test" "$tmpdir/test.c" rc=$? rm -r "$tmpdir" || exit $? return $rc } TARGET=bup . ./configure.inc AC_INIT $TARGET if ! AC_PROG_CC; then LOG " You need to have a functional C compiler to build $TARGET" exit 1 fi MAKE="$(bup_find_prog make "$MAKE")" if test -z "$MAKE"; then MAKE="$(bup_find_prog gmake "$GMAKE")" fi if test -z "$MAKE"; then AC_FAIL "ERROR: unable to find make" fi if ! ($MAKE --version | grep "GNU Make"); then AC_FAIL "ERROR: $MAKE is not GNU Make" fi MAKE_VERSION=`$MAKE --version | grep "GNU Make" | awk '{print $3}'` if [ -z "$MAKE_VERSION" ]; then AC_FAIL "ERROR: $MAKE --version does not return sensible output?" fi expr "$MAKE_VERSION" '>=' '3.81' || AC_FAIL "ERROR: $MAKE must be >= version 3.81" AC_SUB bup_make "$MAKE" bup_python="$PYTHON" test -z "$bup_python" && bup_python="$(bup_find_prog python2.7 '')" test -z "$bup_python" && bup_python="$(bup_find_prog python2.6 '')" test -z "$bup_python" && bup_python="$(bup_find_prog python2 '')" test -z "$bup_python" && bup_python="$(bup_find_prog python '')" if test -z "$bup_python"; then AC_FAIL "ERROR: unable to find python" else AC_SUB bup_python "$bup_python" fi if test -z "$(bup_find_prog git '')"; then AC_FAIL "ERROR: unable to find git" fi # For stat. AC_CHECK_HEADERS sys/stat.h AC_CHECK_HEADERS sys/types.h # For stat and mincore. AC_CHECK_HEADERS unistd.h # For mincore. AC_CHECK_HEADERS sys/mman.h # For FS_IOC_GETFLAGS and FS_IOC_SETFLAGS. AC_CHECK_HEADERS linux/fs.h AC_CHECK_HEADERS sys/ioctl.h # On GNU/kFreeBSD utimensat is defined in GNU libc, but won't work. if [ -z "$OS_GNU_KFREEBSD" ]; then AC_CHECK_FUNCS utimensat fi AC_CHECK_FUNCS utimes AC_CHECK_FUNCS lutimes AC_CHECK_FUNCS mincore mincore_incore_code=" #if 0$ac_defined_HAVE_UNISTD_H #include #endif #if 0$ac_defined_HAVE_SYS_MMAN_H #include #endif int main(int argc, char **argv) { if (MINCORE_INCORE) return 0; } " mincore_buf_type_code() { local vec_type="$1" echo " #include int main(int argc, char **argv) { void *x = 0; $vec_type *buf = 0; return mincore(x, 0, buf); }" || exit $? 
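    # Descriptive note (added comment, not in the original script): the probe
    # echoed above is test-compiled below against both "char" and "unsigned
    # char" because the mincore() prototype differs across platforms -- glibc
    # on Linux declares the vector as "unsigned char *", while BSD-derived
    # systems use "char *" -- and BUP_MINCORE_BUF_TYPE records whichever
    # variant this compiler accepts under -Werror.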
} if test "$ac_defined_HAVE_MINCORE"; then TLOGN "checking for MINCORE_INCORE" if bup_try_c_code "$mincore_incore_code"; then AC_DEFINE BUP_HAVE_MINCORE_INCORE 1 TLOG ' (found)' else TLOG ' (not found)' fi TLOGN "checking mincore buf type" if bup_try_c_code "$(mincore_buf_type_code char)"; then AC_DEFINE BUP_MINCORE_BUF_TYPE 'char' TLOG ' (char)' elif bup_try_c_code "$(mincore_buf_type_code 'unsigned char')"; then AC_DEFINE BUP_MINCORE_BUF_TYPE 'unsigned char' TLOG ' (unsigned char)' else AC_FAIL "ERROR: unexpected mincore definition; please notify bup-list@googlegroups.com" fi fi AC_CHECK_FIELD stat st_atim sys/types.h sys/stat.h unistd.h AC_CHECK_FIELD stat st_mtim sys/types.h sys/stat.h unistd.h AC_CHECK_FIELD stat st_ctim sys/types.h sys/stat.h unistd.h AC_CHECK_FIELD stat st_atimensec sys/types.h sys/stat.h unistd.h AC_CHECK_FIELD stat st_mtimensec sys/types.h sys/stat.h unistd.h AC_CHECK_FIELD stat st_ctimensec sys/types.h sys/stat.h unistd.h AC_CHECK_FIELD tm tm_gmtoff time.h __config_files="$__config_files config.vars.sh" AC_OUTPUT config.vars printf 'bup_make=%q\n' "$MAKE" > config.vars.sh printf 'bup_python=%q\n' "$bup_python" >> config.vars.sh bup-0.29/config/configure.inc000066400000000000000000000643441303127641400161670ustar00rootroot00000000000000# -*-shell-script-*- # @(#) configure.inc 1.40@(#) # Copyright (c) 1999-2007 David Parsons. All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions # are met: # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # 3. My name may not be used to endorse or promote products derived # from this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY DAVID PARSONS ``AS IS'' AND ANY # EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, # THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL DAVID # PARSONS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, # EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED # TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING # IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF # THE POSSIBILITY OF SUCH DAMAGE. # # # this preamble code is executed when this file is sourced and it picks # interesting things off the command line. # ac_default_path="/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin" ac_standard="--src=DIR where the source lives (.) 
--prefix=DIR where to install the final product (/usr/local) --execdir=DIR where to put executables (prefix/bin) --sbindir=DIR where to put static executables (prefix/sbin) --confdir=DIR where to put configuration information (/etc) --libdir=DIR where to put libraries (prefix/lib) --libexecdir=DIR where to put private executables --mandir=DIR where to put manpages" __fail=exit if dirname B/A 2>/dev/null >/dev/null; then __ac_dirname() { dirname "$1" } else __ac_dirname() { echo "$1" | sed -e 's:/[^/]*$::' } fi ac_progname=$0 ac_configure_command= Q=\' for x in "$@"; do ac_configure_command="$ac_configure_command $Q$x$Q" done # ac_configure_command="$*" __d=`__ac_dirname "$ac_progname"` if [ "$__d" = "$ac_progname" ]; then AC_SRCDIR=`pwd` else AC_SRCDIR=`cd $__d;pwd` fi __ac_dir() { if test -d "$1"; then (cd "$1";pwd) else echo "$1"; fi } while [ $# -gt 0 ]; do unset matched case X"$1" in X--src|X--srcdir) AC_SRCDIR=`__ac_dir "$2"` _set_srcdir=1 shift 2;; X--src=*|X--srcdir=*) __d=`echo "$1" | sed -e 's/^[^=]*=//'` AC_SRCDIR=`__ac_dir "$__d"` _set_srcdir=1 shift 1 ;; X--prefix) AC_PREFIX=`__ac_dir "$2"` _set_prefix=1 shift 2;; X--prefix=*) __d=`echo "$1"| sed -e 's/^[^=]*=//'` AC_PREFIX=`__ac_dir "$__d"` _set_prefix=1 shift 1;; X--confdir) AC_CONFDIR=`__ac_dir "$2"` _set_confdir=1 shift 2;; X--confdir=*) __d=`echo "$1" | sed -e 's/^[^=]*=//'` AC_CONFDIR=`__ac_dir "$__d"` _set_confdir=1 shift 1;; X--libexec|X--libexecdir) AC_LIBEXEC=`__ac_dir "$2"` _set_libexec=1 shift 2;; X--libexec=*|X--libexecdir=*) __d=`echo "$1" | sed -e 's/^[^=]*=//'` AC_LIBEXEC=`__ac_dir "$__d"` _set_libexec=1 shift 1;; X--lib|X--libdir) AC_LIBDIR=`__ac_dir "$2"` _set_libdir=1 shift 2;; X--lib=*|X--libdir=*) __d=`echo "$1" | sed -e 's/^[^=]*=//'` AC_LIBDIR=`__ac_dir "$__d"` _set_libdir=1 shift 1;; X--exec|X--execdir) AC_EXECDIR=`__ac_dir "$2"` _set_execdir=1 shift 2;; X--exec=*|X--execdir=*) __d=`echo "$1" | sed -e 's/^[^=]*=//'` AC_EXECDIR=`__ac_dir "$__d"` _set_execdir=1 shift 1;; X--sbin|X--sbindir) AC_SBINDIR=`__ac_dir "$2"` _set_sbindir=1 shift 2;; X--sbin=*|X--sbindir=*) __d=`echo "$1" | sed -e 's/^[^=]*=//'` AC_SBINDIR=`__ac_dir "$__d"` _set_sbindir=1 shift 1;; X--man|X--mandir) AC_MANDIR=`__ac_dir "$2"` _set_mandir=1 shift 2;; X--man=*|X--mandir=*) __d=`echo "$1" | sed -e 's/^[^=]*=//'` AC_MANDIR=`__ac_dir "$__d"` _set_mandir=1 shift 1;; X--use-*=*) _var=`echo "$1"| sed -n 's/^--use-\([A-Za-z][-A-Za-z0-9_]*\)=.*$/\1/p'` if [ "$_var" ]; then _val=`echo "$1" | sed -e 's/^--use-[^=]*=\(.*\)$/\1/'` _v=`echo $_var | tr '[a-z]' '[A-Z]' | tr '-' '_'` case X"$_val" in X[Yy][Ee][Ss]|X[Tt][Rr][Uu][Ee]) eval USE_${_v}=T ;; X[Nn][Oo]|X[Ff][Aa][Ll][Ss][Ee]) eval unset USE_${_v} ;; *) echo "Bad value for --use-$_var ; must be yes or no" exit 1 ;; esac else echo "Bad option $1. Use --help to show options" 1>&2 exit 1 fi shift 1 ;; X--use-*) _var=`echo "$1"|sed -n 's/^--use-\([A-Za-z][-A-Za-z0-9_]*\)$/\1/p'` _v=`echo $_var | tr '[a-z]' '[A-Z]' | tr '-' '_'` eval USE_${_v}=T shift 1;; X--with-*=*) _var=`echo "$1"| sed -n 's/^--with-\([A-Za-z][-A-Za-z0-9_]*\)=.*$/\1/p'` if [ "$_var" ]; then _val=`echo "$1" | sed -e 's/^--with-[^=]*=\(.*\)$/\1/'` _v=`echo $_var | tr '[a-z]' '[A-Z]' | tr '-' '_'` eval WITH_${_v}=\"$_val\" else echo "Bad option $1. Use --help to show options" 1>&2 exit 1 fi shift 1 ;; X--with-*) _var=`echo "$1" | sed -n 's/^--with-\([A-Za-z][A-Za-z0-9_-]*\)$/\1/p'` if [ "$_var" ]; then _v=`echo $_var | tr '[a-z]' '[A-Z]' | tr '-' '_'` eval WITH_${_v}=1 else echo "Bad option $1. 
Use --help to show options" 1>&2 exit 1 fi shift 1 ;; X--help) echo "$ac_standard" test "$ac_help" && echo "$ac_help" exit 0;; *) if [ "$LOCAL_AC_OPTIONS" ]; then eval "$LOCAL_AC_OPTIONS" else ac_error=T fi if [ "$ac_error" ]; then echo "Bad option $1. Use --help to show options" 1>&2 exit 1 fi ;; esac done # # echo w/o newline # echononl() { ${ac_echo:-echo} "${@}$ac_echo_nonl" } # # log something to the terminal and to a logfile. # LOG () { echo "$@" echo "$@" 1>&5 } # # log something to the terminal without a newline, and to a logfile with # a newline # LOGN () { echononl "$@" 1>&5 echo "$@" } # # log something to the terminal # TLOG () { echo "$@" 1>&5 } # # log something to the terminal, no newline # TLOGN () { echononl "$@" 1>&5 } # # AC_CONTINUE tells configure not to bomb if something fails, but to # continue blithely along # AC_CONTINUE () { __fail="return" } # # Emulate gnu autoconf's AC_CHECK_HEADERS() function # AC_CHECK_HEADERS () { AC_PROG_CC echo "/* AC_CHECK_HEADERS */" > /tmp/ngc$$.c for hdr in $*; do echo "#include <$hdr>" >> /tmp/ngc$$.c done echo "main() { }" >> /tmp/ngc$$.c LOGN "checking for header $hdr" if $AC_CC -o /tmp/ngc$$ /tmp/ngc$$.c; then AC_DEFINE 'HAVE_'`echo $hdr | tr 'a-z' 'A-Z' | tr './' '_'` 1 TLOG " (found)" rc=0 else TLOG " (not found)" rc=1 fi rm -f /tmp/ngc$$.c /tmp/ngc$$ return $rc } # # emulate GNU autoconf's AC_CHECK_FUNCS function # AC_CHECK_FUNCS () { AC_PROG_CC F=$1 shift rm -f /tmp/ngc$$.c while [ "$1" ]; do echo "#include <$1>" >> /tmp/ngc$$.c shift done cat >> /tmp/ngc$$.c << EOF main() { $F(); } EOF LOGN "checking for the $F function" if $AC_CC -o /tmp/ngc$$ /tmp/ngc$$.c $LIBS; then AC_DEFINE `echo ${2:-HAVE_$F} | tr 'a-z' 'A-Z'` 1 TLOG " (found)" rc=0 else echo "offending command was:" cat /tmp/ngc$$.c echo "$AC_CC -o /tmp/ngc$$ /tmp/ngc$$.c $LIBS" TLOG " (not found)" rc=1 fi rm -f /tmp/ngc$$.c /tmp/ngc$$ return $rc } # # check to see if some structure exists # # usage: AC_CHECK_STRUCT structure {include ...} # AC_CHECK_STRUCT () { AC_PROG_CC struct=$1 shift rm -f /tmp/ngc$$.c for include in $*; do echo "#include <$include>" >> /tmp/ngc$$.c done cat >> /tmp/ngc$$.c << EOF main() { struct $struct foo; } EOF LOGN "checking for struct $struct" if $AC_CC -o /tmp/ngc$$ /tmp/ngc$$.c $AC_LIBS 2>>config.log; then AC_DEFINE HAVE_STRUCT_`echo ${struct} | tr 'a-z' 'A-Z'` TLOG " (found)" rc=0 else TLOG " (not found)" rc=1 fi rm -f /tmp/ngc$$.c /tmp/ngc$$ return $rc } # # check to see if some structure contains a field # # usage: AC_CHECK_FIELD structure field {include ...} # AC_CHECK_FIELD () { AC_PROG_CC struct=$1 field=$2 shift 2 rm -f /tmp/ngc$$.c for include in $*;do echo "#include <$include>" >> /tmp/ngc$$.c done cat >> /tmp/ngc$$.c << EOF main() { struct $struct foo; foo.$field; } EOF LOGN "checking that struct $struct has a $field field" if $AC_CC -o /tmp/ngc$$ /tmp/ngc$$.c $AC_LIBS 2>>config.log; then AC_DEFINE HAVE_`echo ${struct}_$field | tr 'a-z' 'A-Z'` TLOG " (yes)" rc=0 else TLOG " (no)" rc=1 fi rm -f /tmp/ngc$$.c /tmp/ngc$$ return $rc } # # check that the C compiler works # AC_PROG_CC () { test "$AC_CC" && return 0 cat > /tmp/ngc$$.c << \EOF #include main() { puts("hello, sailor"); } EOF TLOGN "checking the C compiler" unset AC_CFLAGS AC_LDFLAGS if [ "$CC" ] ; then AC_CC="$CC" elif [ "$WITH_PATH" ]; then AC_CC=`acLookFor cc` elif [ "`acLookFor cc`" ]; then # don't specify the full path if the user is looking in their $PATH # for a C compiler. AC_CC=cc fi # finally check for POSIX c89 test "$AC_CC" || AC_CC=`acLookFor c89` if [ ! 
"$AC_CC" ]; then TLOG " (no C compiler found)" $__fail 1 fi echo "checking out the C compiler" $AC_CC -o /tmp/ngc$$ /tmp/ngc$$.c status=$? TLOGN " ($AC_CC)" if [ $status -eq 0 ]; then TLOG " ok" # check that the CFLAGS and LDFLAGS aren't bogus unset AC_CFLAGS AC_LDFLAGS if [ "$CFLAGS" ]; then test "$CFLAGS" && echo "validating CFLAGS=${CFLAGS}" if $AC_CC $CFLAGS -o /tmp/ngc$$.o /tmp/ngc$$.c ; then AC_CFLAGS=${CFLAGS:-"-g"} test "$CFLAGS" && echo "CFLAGS=\"${CFLAGS}\" are okay" elif [ "$CFLAGS" ]; then echo "ignoring bogus CFLAGS=\"${CFLAGS}\"" fi else AC_CFLAGS=-g fi if [ "$LDFLAGS" ]; then test "$LDFLAGS" && echo "validating LDFLAGS=${LDFLAGS}" if $AC_CC $LDFLAGS -o /tmp/ngc$$ /tmp/ngc$$.o; then AC_LDFLAGS=${LDFLAGS:-"-g"} test "$LDFLAGS" && TLOG "LDFLAGS=\"${LDFLAGS}\" are okay" elif [ "$LDFLAGS" ]; then TLOG "ignoring bogus LDFLAGS=\"${LDFLAGS}\"" fi else AC_LDFLAGS=${CFLAGS:-"-g"} fi else AC_FAIL " does not compile code properly" fi AC_SUB 'CC' "$AC_CC" rm -f /tmp/ngc$$ /tmp/ngc$$.c /tmp/ngc$$.o return $status } # # acLookFor actually looks for a program, without setting anything. # acLookFor () { path=${AC_PATH:-$ac_default_path} case "X$1" in X-[rx]) __mode=$1 shift ;; *) __mode=-x ;; esac oldifs="$IFS" for program in $*; do IFS=":" for x in $path; do if [ $__mode $x/$program -a -f $x/$program ]; then echo $x/$program break 2 fi done done IFS="$oldifs" unset __mode } # # check that a program exists and set its path # MF_PATH_INCLUDE () { SYM=$1; shift case X$1 in X-[rx]) __mode=$1 shift ;; *) unset __mode ;; esac TLOGN "looking for $1" DEST=`acLookFor $__mode $*` __sym=`echo "$SYM" | tr '[a-z]' '[A-Z]'` if [ "$DEST" ]; then TLOG " ($DEST)" echo "$1 is $DEST" AC_MAK $SYM AC_DEFINE PATH_$__sym \""$DEST"\" AC_SUB $__sym "$DEST" eval CF_$SYM=$DEST return 0 else #AC_SUB $__sym '' echo "$1 is not found" TLOG " (not found)" return 1 fi } # # AC_INIT starts the ball rolling # # After AC_INIT, fd's 1 and 2 point to config.log # and fd 5 points to what used to be fd 1 # AC_INIT () { __config_files="config.cmd config.sub config.h config.mak config.log" __config_detritus="config.h.tmp" rm -f $__config_files $__config_detritus __cwd=`pwd` exec 5>&1 1>$__cwd/config.log 2>&1 AC_CONFIGURE_FOR=__AC_`echo $1 | sed -e 's/\..$//' | tr 'a-z' 'A-Z' | tr ' ' '_'`_D # check to see whether to use echo -n or echo ...\c # echo -n hello > $$ echo world >> $$ if grep "helloworld" $$ >/dev/null; then ac_echo="echo -n" echo "[echo -n] works" else ac_echo="echo" echo 'hello\c' > $$ echo 'world' >> $$ if grep "helloworld" $$ >/dev/null; then ac_echo_nonl='\c' echo "[echo ...\\c] works" fi fi rm -f $$ LOG "Configuring for [$1]" rm -f $__cwd/config.h cat > $__cwd/config.h.tmp << EOF /* * configuration for $1${2:+" ($2)"}, generated `date` * by ${LOGNAME:-`whoami`}@`hostname` */ #ifndef $AC_CONFIGURE_FOR #define $AC_CONFIGURE_FOR 1 EOF unset __share if [ -d $AC_PREFIX/share/man ]; then for t in 1 2 3 4 5 6 7 8 9; do if [ -d $AC_PREFIX/share/man/man$t ]; then __share=/share elif [ -d $AC_PREFIX/share/man/cat$t ]; then __share=/share fi done else __share= fi if [ -d $AC_PREFIX/libexec ]; then __libexec=libexec else __libexec=lib fi AC_PREFIX=${AC_PREFIX:-/usr/local} AC_EXECDIR=${AC_EXECDIR:-$AC_PREFIX/bin} AC_SBINDIR=${AC_SBINDIR:-$AC_PREFIX/sbin} AC_LIBDIR=${AC_LIBDIR:-$AC_PREFIX/lib} AC_MANDIR=${AC_MANDIR:-$AC_PREFIX$__share/man} AC_LIBEXEC=${AC_LIBEXEC:-$AC_PREFIX/$__libexec} AC_CONFDIR=${AC_CONFDIR:-/etc} AC_PATH=${WITH_PATH:-$PATH} AC_PROG_CPP AC_PROG_INSTALL ac_os=`uname -s | sed 's/[-_].*//; s/[^a-zA-Z0-9]/_/g'` 
_os=`echo $ac_os | tr '[a-z]' '[A-Z]'` AC_DEFINE OS_$_os 1 eval OS_${_os}=1 unset _os } # # AC_LIBRARY checks to see if a given library exists and contains the # given function. # usage: AC_LIBRARY function library [alternate ...] # AC_LIBRARY() { SRC=$1 shift __acllibs= __aclhdrs= for x in "$@"; do case X"$x" in X-l*) __acllibs="$__acllibs $x" ;; *) __aclhdrs="$__aclhdrs $x" ;; esac done # first see if the function can be found in any of the # current libraries AC_QUIET AC_CHECK_FUNCS $SRC $__aclhdrs && return 0 # then search through the list of libraries __libs="$LIBS" for x in $__acllibs; do LIBS="$__libs $x" if AC_QUIET AC_CHECK_FUNCS $SRC $__aclhdrs; then AC_LIBS="$AC_LIBS $x" return 0 fi done return 1 } # # AC_PROG_LEX checks to see if LEX exists, and if it's lex or flex. # AC_PROG_LEX() { TLOGN "looking for lex " DEST=`acLookFor lex` if [ "$DEST" ]; then AC_MAK LEX AC_DEFINE PATH_LEX \"$DEST\" AC_SUB 'LEX' "$DEST" echo "lex is $DEST" else DEST=`acLookFor flex` if [ "$DEST" ]; then AC_MAK FLEX AC_DEFINE 'LEX' \"$DEST\" AC_SUB 'LEX', "$DEST" echo "lex is $DEST" else AC_SUB LEX '' echo "neither lex or flex found" TLOG " (not found)" return 1 fi fi if AC_LIBRARY yywrap -ll -lfl; then TLOG "($DEST)" return 0 fi TLOG "(no lex library found)" return 1 } # # AC_PROG_YACC checks to see if YACC exists, and if it's bison or # not. # AC_PROG_YACC () { TLOGN "looking for yacc " DEST=`acLookFor yacc` if [ "$DEST" ]; then AC_MAK YACC AC_DEFINE PATH_YACC \"$DEST\" AC_SUB 'YACC' "$DEST" TLOG "($DEST)" echo "yacc is $DEST" else DEST=`acLookFor bison` if [ "$DEST" ]; then AC_MAK BISON AC_DEFINE 'YACC' \"$DEST\" AC_SUB 'YACC' "$DEST -y" echo "yacc is $DEST -y" TLOG "($DEST -y)" else AC_SUB 'YACC' '' echo "neither yacc or bison found" TLOG " (not found)" return 1 fi fi return 0 } # # AC_PROG_LN_S checks to see if ln exists, and, if so, if ln -s works # AC_PROG_LN_S () { test "$AC_FIND_PROG" || AC_PROG_FIND test "$AC_FIND_PROG" || return 1 TLOGN "looking for \"ln -s\"" DEST=`acLookFor ln` if [ "$DEST" ]; then rm -f /tmp/b$$ $DEST -s /tmp/a$$ /tmp/b$$ if [ "`$AC_FIND_PROG /tmp/b$$ -type l -print`" ]; then TLOG " ($DEST)" echo "$DEST exists, and ln -s works" AC_SUB 'LN_S' "$DEST -s" rm -f /tmp/b$$ else AC_SUB 'LN_S' '' TLOG " ($DEST exists, but -s does not seem to work)" echo "$DEST exists, but ln -s doesn't seem to work" rm -f /tmp/b$$ return 1 fi else AC_SUB 'LN_S' '' echo "ln not found" TLOG " (not found)" return 1 fi } # # AC_PROG_FIND looks for the find program and sets the FIND environment # variable # AC_PROG_FIND () { if test -z "$AC_FIND_PROG"; then MF_PATH_INCLUDE FIND find rc=$? AC_FIND_PROG=$DEST return $rc fi return 0 } # # AC_PROG_AWK looks for the awk program and sets the AWK environment # variable # AC_PROG_AWK () { if test -z "$AC_AWK_PROG"; then MF_PATH_INCLUDE AWK awk rc=$? AC_AWK_PROG=$DEST return $rc fi return 0 } # # AC_PROG_SED looks for the sed program and sets the SED environment # variable # AC_PROG_SED () { if test -z "$AC_SED_PROG"; then MF_PATH_INCLUDE SED sed rc=$? 
AC_SED_PROG=$DEST return $rc fi return 0 } # # AC_HEADER_SYS_WAIT looks for sys/wait.h # AC_HEADER_SYS_WAIT () { AC_CHECK_HEADERS sys/wait.h || return 1 } # # AC_TYPE_PID_T checks to see if the pid_t type exists # AC_TYPE_PID_T () { cat > /tmp/pd$$.c << EOF #include main() { pid_t me; } EOF LOGN "checking for pid_t" if $AC_CC -c /tmp/pd$$.c -o /tmp/pd$$.o; then TLOG " (found)" rc=0 else echo "typedef int pid_t;" >> $__cwd/config.h.tmp TLOG " (not found)" rc=1 fi rm -f /tmp/pd$$.o /tmp/pd$$.c return $rc } # # AC_C_CONST checks to see if the compiler supports the const keyword # AC_C_CONST () { cat > /tmp/pd$$.c << EOF const char me=1; EOF LOGN "checking for \"const\" keyword" if $AC_CC -c /tmp/pd$$.c -o /tmp/pd$$.o; then TLOG " (yes)" rc=0 else AC_DEFINE 'const' '/**/' TLOG " (no)" rc=1 fi rm -f /tmp/pd$$.o /tmp/pd$$.c return $rc } # # AC_SCALAR_TYPES checks to see if the compiler can generate 2 and 4 byte ints. # AC_SCALAR_TYPES () { cat > /tmp/pd$$.c << EOF #include main() { unsigned long v_long; unsigned int v_int; unsigned short v_short; if (sizeof v_long == 4) puts("#define DWORD unsigned long"); else if (sizeof v_int == 4) puts("#define DWORD unsigned int"); else exit(1); if (sizeof v_int == 2) puts("#define WORD unsigned int"); else if (sizeof v_short == 2) puts("#define WORD unsigned short"); else exit(2); puts("#define BYTE unsigned char"); exit(0); } EOF rc=1 LOGN "defining WORD & DWORD scalar types" if $AC_CC /tmp/pd$$.c -o /tmp/pd$$; then if /tmp/pd$$ >> $__cwd/config.h.tmp; then rc=0 fi fi case "$rc" in 0) TLOG "" ;; *) TLOG " ** FAILED **" ;; esac rm -f /tmp/pd$$ /tmp/pd$$.c } # # AC_OUTPUT generates makefiles from makefile.in's # AC_OUTPUT () { cd $__cwd AC_SUB 'LIBS' "$AC_LIBS" AC_SUB 'CONFIGURE_FILES' "$__config_files" AC_SUB 'CONFIGURE_DETRITUS' "$__config_detritus" AC_SUB 'GENERATED_FILES' "$*" AC_SUB 'CFLAGS' "$AC_CFLAGS" AC_SUB 'FCFLAGS' "$AC_FCFLAGS" AC_SUB 'CXXFLAGS' "$AC_CXXFLAGS" AC_SUB 'LDFLAGS' "$AC_LDFLAGS" AC_SUB 'srcdir' "$AC_SRCDIR" AC_SUB 'prefix' "$AC_PREFIX" AC_SUB 'exedir' "$AC_EXECDIR" AC_SUB 'sbindir' "$AC_SBINDIR" AC_SUB 'libdir' "$AC_LIBDIR" AC_SUB 'libexec' "$AC_LIBEXEC" AC_SUB 'confdir' "$AC_CONFDIR" AC_SUB 'mandir' "$AC_MANDIR" if [ -r config.sub ]; then test "$AC_SED_PROG" || AC_PROG_SED test "$AC_SED_PROG" || return 1 echo >> config.h.tmp echo "#endif/* ${AC_CONFIGURE_FOR} */" >> config.h.tmp rm -f config.cmd Q=\' cat - > config.cmd << EOF #! /bin/sh ${CXX:+CXX=${Q}${CXX}${Q}} ${CXXFLAGS:+CXXFLAGS=${Q}${CXXFLAGS}${Q}} ${FC:+FC=${Q}${FC}${Q}} ${FCFLAGS:+FCFLAGS=${Q}${FCFLAGS}${Q}} ${CC:+CC=${Q}${CC}${Q}} ${CFLAGS:+CFLAGS=${Q}${CFLAGS}${Q}} $ac_progname $ac_configure_command EOF chmod +x config.cmd __d=$AC_SRCDIR for makefile in $*;do if test -r $__d/${makefile}.in; then LOG "generating $makefile" ./config.md `__ac_dirname ./$makefile` 2>/dev/null $AC_SED_PROG -f config.sub < $__d/${makefile}.in > $makefile __config_files="$__config_files $makefile" else LOG "WARNING: ${makefile}.in does not exist!" fi done unset __d else echo fi cp $__cwd/config.h.tmp $__cwd/config.h } # # AC_CHECK_FLOCK checks to see if flock() exists and if the LOCK_NB argument # works properly. 
# AC_CHECK_FLOCK() { AC_CHECK_HEADERS sys/types.h sys/file.h fcntl.h cat << EOF > $$.c #include #include #include #include main() { int x = open("$$.c", O_RDWR, 0666); int y = open("$$.c", O_RDWR, 0666); if (flock(x, LOCK_EX) != 0) exit(1); if (flock(y, LOCK_EX|LOCK_NB) == 0) exit(1); exit(0); } EOF LOGN "checking for flock()" HAS_FLOCK=0 if $AC_CC -o flock $$.c ; then if ./flock ; then LOG " (found)" HAS_FLOCK=1 AC_DEFINE HAS_FLOCK else LOG " (bad)" fi else LOG " (no)" fi rm -f flock $$.c case "$HAS_FLOCK" in 0) return 1 ;; *) return 0 ;; esac } # # AC_CHECK_RESOLVER finds out whether the berkeley resolver is # present on this system. # AC_CHECK_RESOLVER () { AC_PROG_CC TLOGN "checking for the Berkeley resolver library" cat > /tmp/ngc$$.c << EOF #include #include #include #include main() { char bfr[256]; res_init(); res_query("hello", C_IN, T_A, bfr, sizeof bfr); } EOF # first see if res_init() and res_query() actually exist... if $AC_CC -o /tmp/ngc$$ /tmp/ngc$$.c; then __extralib= elif $AC_CC -o /tmp/ngc$$ /tmp/ngc$$.c -lresolv; then __extralib=-lresolv AC_LIBS="$AC_LIBS -lresolv" else TLOG " (not found)" rm -f /tmp/ngc$$.c return 1 fi # if res_init() and res_query() actually exist, check to # see if the HEADER structure is defined ... cat > /tmp/ngc$$.c << EOF #include #include #include #include main() { HEADER hhh; res_init(); } EOF if $AC_CC -o /tmp/ngc$$ /tmp/ngc$$.c $__extralib; then TLOG " (found)" elif $AC_CC -DBIND_8_COMPAT -o /tmp/ngc$$ /tmp/ngc$$.c $__extralib; then TLOG " (bind9 with BIND_8_COMPAT)" AC_DEFINE BIND_8_COMPAT 1 else TLOG " (broken)" rm -f /tmp/ngc$$.c return 1 fi rm -f /tmp/ngc$$.c return 0 } # # AC_PROG_INSTALL finds the install program and guesses whether it's a # Berkeley or GNU install program # AC_PROG_INSTALL () { DEST=`acLookFor install` LOGN "checking for install" unset IS_BSD if [ "$DEST" ]; then # BSD install or GNU install? Let's find out... touch /tmp/a$$ $DEST /tmp/a$$ /tmp/b$$ if test -r /tmp/a$$; then LOG " ($DEST)" else IS_BSD=1 LOG " ($DEST) bsd install" fi rm -f /tmp/a$$ /tmp/b$$ else DEST=`acLookFor ginstall` if [ "$DEST" ]; then LOG " ($DEST)" else DEST="false" LOG " (not found)" fi fi if [ "$IS_BSD" ]; then PROG_INSTALL="$DEST -c" else PROG_INSTALL="$DEST" fi AC_SUB 'INSTALL' "$PROG_INSTALL" AC_SUB 'INSTALL_PROGRAM' "$PROG_INSTALL -s -m 755" AC_SUB 'INSTALL_DATA' "$PROG_INSTALL -m 444" # finally build a little directory installer # if mkdir -p works, use that, otherwise use install -d, # otherwise build a script to do it by hand. # in every case, test to see if the directory exists before # making it. if mkdir -p $$a/b; then # I like this method best. __mkdir="mkdir -p" rmdir $$a/b rmdir $$a elif $PROG_INSTALL -d $$a/b; then __mkdir="$PROG_INSTALL -d" rmdir $$a/b rmdir $$a fi __config_files="$__config_files config.md" AC_SUB 'INSTALL_DIR' "$__cwd/config.md" echo "#! /bin/sh" > $__cwd/config.md echo "# script generated" `date` "by configure.sh" >> $__cwd/config.md echo >> $__cwd/config.md if [ "$__mkdir" ]; then echo "test -d \"\$1\" || $__mkdir \"\$1\"" >> $__cwd/config.md echo "exit $?" 
>> $__cwd/config.md else cat - >> $__cwd/config.md << \EOD pieces=`IFS=/; for x in $1; do echo $x; done` dir= for x in $pieces; do dir="$dir$x" mkdir $dir || exit 1 dir="$dir/" done exit 0 EOD fi chmod +x $__cwd/config.md } # # acCheckCPP is a local that runs a C preprocessor with a given set of # compiler options # acCheckCPP () { cat > /tmp/ngc$$.c << EOF #define FOO BAR FOO EOF if $1 $2 /tmp/ngc$$.c > /tmp/ngc$$.o; then if grep -v '#define' /tmp/ngc$$.o | grep -s BAR >/dev/null; then echo "CPP=[$1], CPPFLAGS=[$2]" AC_SUB 'CPP' "$1" AC_SUB 'CPPFLAGS' "$2" rm /tmp/ngc$$.c /tmp/ngc$$.o return 0 fi fi rm /tmp/ngc$$.c /tmp/ngc$$.o return 1 } # # AC_PROG_CPP checks for cpp, then checks to see which CPPFLAGS are needed # to run it as a filter. # AC_PROG_CPP () { if [ "$AC_CPP_PROG" ]; then DEST=$AC_CPP_PROG else __ac_path="$AC_PATH" AC_PATH="/lib:/usr/lib:${__ac_path:-$ac_default_path}" DEST=`acLookFor cpp` AC_PATH="$__ac_path" fi unset fail LOGN "Looking for cpp" if [ "$DEST" ]; then TLOGN " ($DEST)" acCheckCPP $DEST "$CPPFLAGS" || \ acCheckCPP $DEST -traditional-cpp -E || \ acCheckCPP $DEST -E || \ acCheckCPP $DEST -traditional-cpp -pipe || \ acCheckCPP $DEST -pipe || fail=1 if [ "$fail" ]; then AC_FAIL " (can't run cpp as a pipeline)" else TLOG " ok" return 0 fi fi AC_FAIL " (not found)" } # # AC_FAIL spits out an error message, then __fail's AC_FAIL() { LOG "$*" $__fail 1 } # # AC_SUB writes a substitution into config.sub AC_SUB() { ( echononl "s;@$1@;" _subst=`echo $2 | sed -e 's/;/\\;/g'` echononl "$_subst" echo ';g' ) >> $__cwd/config.sub } # # AC_MAK writes a define into config.mak AC_MAK() { echo "HAVE_$1 = 1" >> $__cwd/config.mak } # # AC_DEFINE adds a #define to config.h AC_DEFINE() { local name="$1" value="${2:-1}" if ! printf -v "ac_defined_$name" '%s' "$value"; then AC_FATAL 'AC_DEFINE unable to set "ac_defined_$name" to "$value"' fi echo "#define $name $value" >> $__cwd/config.h.tmp } # # AC_INCLUDE adds a #include to config.h AC_INCLUDE() { echo "#include \"$1\"" >> $__cwd/config.h.tmp } # # AC_CONFIG adds a configuration setting to all the config files AC_CONFIG() { AC_DEFINE "PATH_$1" \""$2"\" AC_MAK "$1" AC_SUB "$1" "$2" } # # AC_QUIET does something quietly AC_QUIET() { eval $* 5>/dev/null } bup-0.29/configure000077500000000000000000000001671303127641400141460ustar00rootroot00000000000000#!/bin/sh if test "$#" -gt 0; then echo "Usage: configure" 1>&2 exit 1 fi cd config && exec ./configure "$@" bup-0.29/configure-version000077500000000000000000000026321303127641400156300ustar00rootroot00000000000000#!/usr/bin/env bash set -euo pipefail top="$(pwd)" readonly top usage() { echo 'Usage: ./configure-version [--update | --clean]' } update-cpy() { declare -r cpy=lib/bup/_checkout.py rm -f $cpy.tmp-$$ local hash date desc hash=$(git log -1 --pretty=format:%H) date=$(git log -1 --pretty=format:%ci) desc=$(git describe --always --match="[0-9]*") cat > $cpy.tmp-$$ <<-EOF COMMIT='$hash' NAMES='(tag: $desc)' DATE='$date' EOF if ! test -e $cpy || ! cmp -s $cpy $cpy.tmp-$$; then mv $cpy.tmp-$$ $cpy; fi rm -f $cpy.tmp-$$ } if test "$#" -ne 1; then usage 1>&2; exit 1 fi if ! test -f lib/bup/bupsplit.c; then echo 'error: cannot find bup source tree' 1>&2 exit 1 fi case "$1" in --update) rc=0 grep -q -F '$Format' lib/bup/_release.py || rc=$? 
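        # Descriptive note (added comment, not in the original): grep exits 0
        # when lib/bup/_release.py still contains the literal '$Format'
        # placeholder, meaning this is a git checkout and _checkout.py should
        # be (re)generated; exit status 1 means "git archive" already expanded
        # the placeholder, i.e. this is a release tree; anything else is a
        # real grep failure.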
case $rc in 0) update-cpy ;; 1) if test -d .git; then echo 'error: detected release, but found ./.git' 1>&2 exit 1 fi echo "Detected release tree; skipping version configuration" 1>&2 exit 0 ;; *) echo 'error: grep failed' 1>&2 exit 1 esac ;; --clean) rm -f lib/bup/_checkout.py lib/bup/_checkout.pyc lib/bup/_checkout.py.tmp-* ;; *) usage 1>&2; exit 1 ;; esac bup-0.29/lib/000077500000000000000000000000001303127641400130015ustar00rootroot00000000000000bup-0.29/lib/__init__.py000066400000000000000000000000001303127641400151000ustar00rootroot00000000000000bup-0.29/lib/bup/000077500000000000000000000000001303127641400135675ustar00rootroot00000000000000bup-0.29/lib/bup/.gitattributes000066400000000000000000000000321303127641400164550ustar00rootroot00000000000000_release.py export-subst bup-0.29/lib/bup/__init__.py000066400000000000000000000000001303127641400156660ustar00rootroot00000000000000bup-0.29/lib/bup/_helpers.c000066400000000000000000001353311303127641400155420ustar00rootroot00000000000000#define _LARGEFILE64_SOURCE 1 #define PY_SSIZE_T_CLEAN 1 #undef NDEBUG #include "../../config/config.h" // According to Python, its header has to go first: // http://docs.python.org/2/c-api/intro.html#include-files #include #include #include #include #include #include #include #include #include #ifdef HAVE_SYS_MMAN_H #include #endif #ifdef HAVE_SYS_TYPES_H #include #endif #ifdef HAVE_SYS_STAT_H #include #endif #ifdef HAVE_UNISTD_H #include #endif #ifdef HAVE_LINUX_FS_H #include #endif #ifdef HAVE_SYS_IOCTL_H #include #endif #ifdef HAVE_TM_TM_GMTOFF #include #endif #include "bupsplit.h" #if defined(FS_IOC_GETFLAGS) && defined(FS_IOC_SETFLAGS) #define BUP_HAVE_FILE_ATTRS 1 #endif /* * Check for incomplete UTIMENSAT support (NetBSD 6), and if so, * pretend we don't have it. */ #if !defined(AT_FDCWD) || !defined(AT_SYMLINK_NOFOLLOW) #undef HAVE_UTIMENSAT #endif #ifndef FS_NOCOW_FL // Of course, this assumes it's a bitfield value. #define FS_NOCOW_FL 0 #endif typedef unsigned char byte; static int istty2 = 0; #ifndef htonll // This function should technically be macro'd out if it's going to be used // more than ocasionally. As of this writing, it'll actually never be called // in real world bup scenarios (because our packs are < MAX_INT bytes). static uint64_t htonll(uint64_t value) { static const int endian_test = 42; if (*(char *)&endian_test == endian_test) // LSB-MSB return ((uint64_t)htonl(value & 0xFFFFFFFF) << 32) | htonl(value >> 32); return value; // already in network byte order MSB-LSB } #endif #define INTEGRAL_ASSIGNMENT_FITS(dest, src) \ ({ \ *(dest) = (src); \ *(dest) == (src) && (*(dest) < 1) == ((src) < 1); \ }) // At the moment any code that calls INTGER_TO_PY() will have to // disable -Wtautological-compare for clang. See below. #define INTEGER_TO_PY(x) \ (((x) >= 0) ? 
PyLong_FromUnsignedLongLong(x) : PyLong_FromLongLong(x)) static int bup_ulong_from_pyint(unsigned long *x, PyObject *py, const char *name) { const long tmp = PyInt_AsLong(py); if (tmp == -1 && PyErr_Occurred()) { if (PyErr_ExceptionMatches(PyExc_OverflowError)) PyErr_Format(PyExc_OverflowError, "%s too big for unsigned long", name); return 0; } if (tmp < 0) { PyErr_Format(PyExc_OverflowError, "negative %s cannot be converted to unsigned long", name); return 0; } *x = tmp; return 1; } static int bup_ulong_from_py(unsigned long *x, PyObject *py, const char *name) { if (PyInt_Check(py)) return bup_ulong_from_pyint(x, py, name); if (!PyLong_Check(py)) { PyErr_Format(PyExc_TypeError, "expected integer %s", name); return 0; } const unsigned long tmp = PyLong_AsUnsignedLong(py); if (PyErr_Occurred()) { if (PyErr_ExceptionMatches(PyExc_OverflowError)) PyErr_Format(PyExc_OverflowError, "%s too big for unsigned long", name); return 0; } *x = tmp; return 1; } static int bup_uint_from_py(unsigned int *x, PyObject *py, const char *name) { unsigned long tmp; if (!bup_ulong_from_py(&tmp, py, name)) return 0; if (tmp > UINT_MAX) { PyErr_Format(PyExc_OverflowError, "%s too big for unsigned int", name); return 0; } *x = tmp; return 1; } static int bup_ullong_from_py(unsigned PY_LONG_LONG *x, PyObject *py, const char *name) { if (PyInt_Check(py)) { unsigned long tmp; if (bup_ulong_from_pyint(&tmp, py, name)) { *x = tmp; return 1; } return 0; } if (!PyLong_Check(py)) { PyErr_Format(PyExc_TypeError, "integer argument expected for %s", name); return 0; } const unsigned PY_LONG_LONG tmp = PyLong_AsUnsignedLongLong(py); if (tmp == (unsigned long long) -1 && PyErr_Occurred()) { if (PyErr_ExceptionMatches(PyExc_OverflowError)) PyErr_Format(PyExc_OverflowError, "%s too big for unsigned long long", name); return 0; } *x = tmp; return 1; } // Probably we should use autoconf or something and set HAVE_PY_GETARGCARGV... #if __WIN32__ || __CYGWIN__ // There's no 'ps' on win32 anyway, and Py_GetArgcArgv() isn't available. static void unpythonize_argv(void) { } #else // not __WIN32__ // For some reason this isn't declared in Python.h extern void Py_GetArgcArgv(int *argc, char ***argv); static void unpythonize_argv(void) { int argc, i; char **argv, *arge; Py_GetArgcArgv(&argc, &argv); for (i = 0; i < argc-1; i++) { if (argv[i] + strlen(argv[i]) + 1 != argv[i+1]) { // The argv block doesn't work the way we expected; it's unsafe // to mess with it. return; } } arge = argv[argc-1] + strlen(argv[argc-1]) + 1; if (strstr(argv[0], "python") && argv[1] == argv[0] + strlen(argv[0]) + 1) { char *p; size_t len, diff; p = strrchr(argv[1], '/'); if (p) { p++; diff = p - argv[0]; len = arge - p; memmove(argv[0], p, len); memset(arge - diff, 0, diff); for (i = 0; i < argc; i++) argv[i] = argv[i+1] ? 
argv[i+1]-diff : NULL; } } } #endif // not __WIN32__ or __CYGWIN__ static int write_all(int fd, const void *buf, const size_t count) { size_t written = 0; while (written < count) { const ssize_t rc = write(fd, buf + written, count - written); if (rc == -1) return -1; written += rc; } return 0; } static int uadd(unsigned long long *dest, const unsigned long long x, const unsigned long long y) { const unsigned long long result = x + y; if (result < x || result < y) return 0; *dest = result; return 1; } static PyObject *append_sparse_region(const int fd, unsigned long long n) { while (n) { off_t new_off; if (!INTEGRAL_ASSIGNMENT_FITS(&new_off, n)) new_off = INT_MAX; const off_t off = lseek(fd, new_off, SEEK_CUR); if (off == (off_t) -1) return PyErr_SetFromErrno(PyExc_IOError); n -= new_off; } return NULL; } static PyObject *record_sparse_zeros(unsigned long long *new_pending, const int fd, unsigned long long prev_pending, const unsigned long long count) { // Add count additional sparse zeros to prev_pending and store the // result in new_pending, or if the total won't fit in // new_pending, write some of the zeros to fd sparsely, and store // the remaining sum in new_pending. if (!uadd(new_pending, prev_pending, count)) { PyObject *err = append_sparse_region(fd, prev_pending); if (err != NULL) return err; *new_pending = count; } return NULL; } static byte* find_not_zero(const byte * const start, const byte * const end) { // Return a pointer to first non-zero byte between start and end, // or end if there isn't one. assert(start <= end); const unsigned char *cur = start; while (cur < end && *cur == 0) cur++; return (byte *) cur; } static byte* find_trailing_zeros(const byte * const start, const byte * const end) { // Return a pointer to the start of any trailing run of zeros, or // end if there isn't one. assert(start <= end); if (start == end) return (byte *) end; const byte * cur = end; while (cur > start && *--cur == 0) {} if (*cur == 0) return (byte *) cur; else return (byte *) (cur + 1); } static byte *find_non_sparse_end(const byte * const start, const byte * const end, const unsigned long long min_len) { // Return the first pointer to a min_len sparse block in [start, // end) if there is one, otherwise a pointer to the start of any // trailing run of zeros. If there are no trailing zeros, return // end. if (start == end) return (byte *) end; assert(start < end); assert(min_len); // Probe in min_len jumps, searching backward from the jump // destination for a non-zero byte. If such a byte is found, move // just past it and try again. 
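    //
    // (Added descriptive comment; not in the original.)  Invariant maintained
    // by the loop below: every byte in [candidate, end_of_known_zeros) has
    // already been seen to be zero, so as soon as a probe window's trailing
    // zeros reach back to end_of_known_zeros, the zero run starting at
    // candidate spans the whole window -- at least min_len bytes -- and
    // candidate can be returned.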
const byte *candidate = start; // End of any run of zeros, starting at candidate, that we've already seen const byte *end_of_known_zeros = candidate; while (end - candidate >= min_len) // Handle all min_len candidate blocks { const byte * const probe_end = candidate + min_len; const byte * const trailing_zeros = find_trailing_zeros(end_of_known_zeros, probe_end); if (trailing_zeros == probe_end) end_of_known_zeros = candidate = probe_end; else if (trailing_zeros == end_of_known_zeros) { assert(candidate >= start); assert(candidate <= end); assert(*candidate == 0); return (byte *) candidate; } else { candidate = trailing_zeros; end_of_known_zeros = probe_end; } } if (candidate == end) return (byte *) end; // No min_len sparse run found, search backward from end const byte * const trailing_zeros = find_trailing_zeros(end_of_known_zeros, end); if (trailing_zeros == end_of_known_zeros) { assert(candidate >= start); assert(candidate < end); assert(*candidate == 0); assert(end - candidate < min_len); return (byte *) candidate; } if (trailing_zeros == end) { assert(*(end - 1) != 0); return (byte *) end; } assert(end - trailing_zeros < min_len); assert(trailing_zeros >= start); assert(trailing_zeros < end); assert(*trailing_zeros == 0); return (byte *) trailing_zeros; } static PyObject *bup_write_sparsely(PyObject *self, PyObject *args) { int fd; unsigned char *buf = NULL; Py_ssize_t sbuf_len; PyObject *py_min_sparse_len, *py_prev_sparse_len; if (!PyArg_ParseTuple(args, "it#OO", &fd, &buf, &sbuf_len, &py_min_sparse_len, &py_prev_sparse_len)) return NULL; unsigned long long min_sparse_len, prev_sparse_len, buf_len; if (!bup_ullong_from_py(&min_sparse_len, py_min_sparse_len, "min_sparse_len")) return NULL; if (!bup_ullong_from_py(&prev_sparse_len, py_prev_sparse_len, "prev_sparse_len")) return NULL; if (sbuf_len < 0) return PyErr_Format(PyExc_ValueError, "negative bufer length"); if (!INTEGRAL_ASSIGNMENT_FITS(&buf_len, sbuf_len)) return PyErr_Format(PyExc_OverflowError, "buffer length too large"); const byte * block = buf; // Start of pending block const byte * const end = buf + buf_len; unsigned long long zeros = prev_sparse_len; while (1) { assert(block <= end); if (block == end) return PyLong_FromUnsignedLongLong(zeros); if (*block != 0) { // Look for the end of block, i.e. the next sparse run of // at least min_sparse_len zeros, or the end of the // buffer. const byte * const probe = find_non_sparse_end(block + 1, end, min_sparse_len); // Either at end of block, or end of non-sparse; write pending data PyObject *err = append_sparse_region(fd, zeros); if (err != NULL) return err; int rc = write_all(fd, block, probe - block); if (rc) return PyErr_SetFromErrno(PyExc_IOError); if (end - probe < min_sparse_len) zeros = end - probe; else zeros = min_sparse_len; block = probe + zeros; } else // *block == 0 { // Should be in the first loop iteration, a sparse run of // zeros, or nearly at the end of the block (within // min_sparse_len). 
            const byte * const zeros_end = find_not_zero(block, end);
            PyObject *err = record_sparse_zeros(&zeros, fd,
                                                zeros, zeros_end - block);
            if (err != NULL)
                return err;
            assert(block <= zeros_end);
            block = zeros_end;
        }
    }
}

static PyObject *selftest(PyObject *self, PyObject *args)
{
    if (!PyArg_ParseTuple(args, ""))
        return NULL;
    return Py_BuildValue("i", !bupsplit_selftest());
}

static PyObject *blobbits(PyObject *self, PyObject *args)
{
    if (!PyArg_ParseTuple(args, ""))
        return NULL;
    return Py_BuildValue("i", BUP_BLOBBITS);
}

static PyObject *splitbuf(PyObject *self, PyObject *args)
{
    unsigned char *buf = NULL;
    Py_ssize_t len = 0;
    int out = 0, bits = -1;

    if (!PyArg_ParseTuple(args, "t#", &buf, &len))
        return NULL;
    assert(len <= INT_MAX);
    out = bupsplit_find_ofs(buf, len, &bits);
    if (out) assert(bits >= BUP_BLOBBITS);
    return Py_BuildValue("ii", out, bits);
}

static PyObject *bitmatch(PyObject *self, PyObject *args)
{
    unsigned char *buf1 = NULL, *buf2 = NULL;
    Py_ssize_t len1 = 0, len2 = 0;
    Py_ssize_t byte;
    int bit;

    if (!PyArg_ParseTuple(args, "t#t#", &buf1, &len1, &buf2, &len2))
        return NULL;

    bit = 0;
    for (byte = 0; byte < len1 && byte < len2; byte++)
    {
        int b1 = buf1[byte], b2 = buf2[byte];
        if (b1 != b2)
        {
            for (bit = 0; bit < 8; bit++)
                if ( (b1 & (0x80 >> bit)) != (b2 & (0x80 >> bit)) )
                    break;
            break;
        }
    }

    assert(byte <= (INT_MAX >> 3));
    return Py_BuildValue("i", byte*8 + bit);
}

static PyObject *firstword(PyObject *self, PyObject *args)
{
    unsigned char *buf = NULL;
    Py_ssize_t len = 0;
    uint32_t v;

    if (!PyArg_ParseTuple(args, "t#", &buf, &len))
        return NULL;

    if (len < 4)
        return NULL;

    v = ntohl(*(uint32_t *)buf);
    return PyLong_FromUnsignedLong(v);
}

#define BLOOM2_HEADERLEN 16

static void to_bloom_address_bitmask4(const unsigned char *buf,
        const int nbits, uint64_t *v, unsigned char *bitmask)
{
    int bit;
    uint32_t high;
    uint64_t raw, mask;

    memcpy(&high, buf, 4);
    mask = (1<<nbits) - 1;
    raw = (((uint64_t)ntohl(high)) << 8) | buf[4];
    bit = (raw >> (37-nbits)) & 0x7;
    *v = (raw >> (40-nbits)) & mask;
    *bitmask = 1 << bit;
}

static void to_bloom_address_bitmask5(const unsigned char *buf,
        const int nbits, uint32_t *v, unsigned char *bitmask)
{
    int bit;
    uint32_t high;
    uint32_t raw, mask;

    memcpy(&high, buf, 4);
    mask = (1<<nbits) - 1;
    raw = ntohl(high);
    bit = (raw >> (29-nbits)) & 0x7;
    *v = (raw >> (32-nbits)) & mask;
    *bitmask = 1 << bit;
}

#define BLOOM_SET_BIT(name, address, otype) \
static void name(unsigned char *bloom, const unsigned char *buf, const int nbits)\
{\
    unsigned char bitmask;\
    otype v;\
    address(buf, nbits, &v, &bitmask);\
    bloom[BLOOM2_HEADERLEN+v] |= bitmask;\
}
BLOOM_SET_BIT(bloom_set_bit4, to_bloom_address_bitmask4, uint64_t)
BLOOM_SET_BIT(bloom_set_bit5, to_bloom_address_bitmask5, uint32_t)

#define BLOOM_GET_BIT(name, address, otype) \
static int name(const unsigned char *bloom, const unsigned char *buf, const int nbits)\
{\
    unsigned char bitmask;\
    otype v;\
    address(buf, nbits, &v, &bitmask);\
    return bloom[BLOOM2_HEADERLEN+v] & bitmask;\
}
BLOOM_GET_BIT(bloom_get_bit4, to_bloom_address_bitmask4, uint64_t)
BLOOM_GET_BIT(bloom_get_bit5, to_bloom_address_bitmask5, uint32_t)

static PyObject *bloom_add(PyObject *self, PyObject *args)
{
    unsigned char *sha = NULL, *bloom = NULL;
    unsigned char *end;
    Py_ssize_t len = 0, blen = 0;
    int nbits = 0, k = 0;

    if (!PyArg_ParseTuple(args, "w#s#ii", &bloom, &blen, &sha, &len, &nbits, &k))
        return NULL;

    if (blen < 16+(1<<nbits) || len % 20 != 0)
        return NULL;

    if (k == 5)
    {
        if (nbits > 29)
            return NULL;
        for (end = sha + len; sha < end; sha += 20/k)
            bloom_set_bit5(bloom, sha, nbits);
    }
    else if (k == 4)
    {
        if (nbits > 37)
            return NULL;
        for (end = sha + len; sha < end; sha += 20/k)
            bloom_set_bit4(bloom, sha, nbits);
    }
    else
        return NULL;

    return
Py_BuildValue("n", len/20); } static PyObject *bloom_contains(PyObject *self, PyObject *args) { unsigned char *sha = NULL, *bloom = NULL; Py_ssize_t len = 0, blen = 0; int nbits = 0, k = 0; unsigned char *end; int steps; if (!PyArg_ParseTuple(args, "t#s#ii", &bloom, &blen, &sha, &len, &nbits, &k)) return NULL; if (len != 20) return NULL; if (k == 5) { if (nbits > 29) return NULL; for (steps = 1, end = sha + 20; sha < end; sha += 20/k, steps++) if (!bloom_get_bit5(bloom, sha, nbits)) return Py_BuildValue("Oi", Py_None, steps); } else if (k == 4) { if (nbits > 37) return NULL; for (steps = 1, end = sha + 20; sha < end; sha += 20/k, steps++) if (!bloom_get_bit4(bloom, sha, nbits)) return Py_BuildValue("Oi", Py_None, steps); } else return NULL; return Py_BuildValue("ii", 1, k); } static uint32_t _extract_bits(unsigned char *buf, int nbits) { uint32_t v, mask; mask = (1<> (32-nbits)) & mask; return v; } static PyObject *extract_bits(PyObject *self, PyObject *args) { unsigned char *buf = NULL; Py_ssize_t len = 0; int nbits = 0; if (!PyArg_ParseTuple(args, "t#i", &buf, &len, &nbits)) return NULL; if (len < 4) return NULL; return PyLong_FromUnsignedLong(_extract_bits(buf, nbits)); } struct sha { unsigned char bytes[20]; }; struct idx { unsigned char *map; struct sha *cur; struct sha *end; uint32_t *cur_name; Py_ssize_t bytes; int name_base; }; static int _cmp_sha(const struct sha *sha1, const struct sha *sha2) { int i; for (i = 0; i < sizeof(struct sha); i++) if (sha1->bytes[i] != sha2->bytes[i]) return sha1->bytes[i] - sha2->bytes[i]; return 0; } static void _fix_idx_order(struct idx **idxs, int *last_i) { struct idx *idx; int low, mid, high, c = 0; idx = idxs[*last_i]; if (idxs[*last_i]->cur >= idxs[*last_i]->end) { idxs[*last_i] = NULL; PyMem_Free(idx); --*last_i; return; } if (*last_i == 0) return; low = *last_i-1; mid = *last_i; high = 0; while (low >= high) { mid = (low + high) / 2; c = _cmp_sha(idx->cur, idxs[mid]->cur); if (c < 0) high = mid + 1; else if (c > 0) low = mid - 1; else break; } if (c < 0) ++mid; if (mid == *last_i) return; memmove(&idxs[mid+1], &idxs[mid], (*last_i-mid)*sizeof(struct idx *)); idxs[mid] = idx; } static uint32_t _get_idx_i(struct idx *idx) { if (idx->cur_name == NULL) return idx->name_base; return ntohl(*idx->cur_name) + idx->name_base; } #define MIDX4_HEADERLEN 12 static PyObject *merge_into(PyObject *self, PyObject *args) { PyObject *py_total, *ilist = NULL; unsigned char *fmap = NULL; struct sha *sha_ptr, *sha_start = NULL; uint32_t *table_ptr, *name_ptr, *name_start; struct idx **idxs = NULL; Py_ssize_t flen = 0; int bits = 0, i; unsigned int total; uint32_t count, prefix; int num_i; int last_i; if (!PyArg_ParseTuple(args, "w#iOO", &fmap, &flen, &bits, &py_total, &ilist)) return NULL; if (!bup_uint_from_py(&total, py_total, "total")) return NULL; num_i = PyList_Size(ilist); idxs = (struct idx **)PyMem_Malloc(num_i * sizeof(struct idx *)); for (i = 0; i < num_i; i++) { long len, sha_ofs, name_map_ofs; idxs[i] = (struct idx *)PyMem_Malloc(sizeof(struct idx)); PyObject *itup = PyList_GetItem(ilist, i); if (!PyArg_ParseTuple(itup, "t#llli", &idxs[i]->map, &idxs[i]->bytes, &len, &sha_ofs, &name_map_ofs, &idxs[i]->name_base)) return NULL; idxs[i]->cur = (struct sha *)&idxs[i]->map[sha_ofs]; idxs[i]->end = &idxs[i]->cur[len]; if (name_map_ofs) idxs[i]->cur_name = (uint32_t *)&idxs[i]->map[name_map_ofs]; else idxs[i]->cur_name = NULL; } table_ptr = (uint32_t *)&fmap[MIDX4_HEADERLEN]; sha_start = sha_ptr = (struct sha *)&table_ptr[1<= 0) { struct idx *idx; uint32_t 
new_prefix; if (count % 102424 == 0 && istty2) fprintf(stderr, "midx: writing %.2f%% (%d/%d)\r", count*100.0/total, count, total); idx = idxs[last_i]; new_prefix = _extract_bits((unsigned char *)idx->cur, bits); while (prefix < new_prefix) table_ptr[prefix++] = htonl(count); memcpy(sha_ptr++, idx->cur, sizeof(struct sha)); *name_ptr++ = htonl(_get_idx_i(idx)); ++idx->cur; if (idx->cur_name != NULL) ++idx->cur_name; _fix_idx_order(idxs, &last_i); ++count; } while (prefix < (1< 0x7fffffff) { *ofs64_ptr++ = htonll(ofs); ofs = 0x80000000 | ofs64_count++; } *ofs_ptr++ = htonl((uint32_t)ofs); } } int rc = msync(fmap, flen, MS_ASYNC); if (rc != 0) return PyErr_SetFromErrnoWithFilename(PyExc_IOError, filename); return PyLong_FromUnsignedLong(count); } // I would have made this a lower-level function that just fills in a buffer // with random values, and then written those values from python. But that's // about 20% slower in my tests, and since we typically generate random // numbers for benchmarking other parts of bup, any slowness in generating // random bytes will make our benchmarks inaccurate. Plus nobody wants // pseudorandom bytes much except for this anyway. static PyObject *write_random(PyObject *self, PyObject *args) { uint32_t buf[1024/4]; int fd = -1, seed = 0, verbose = 0; ssize_t ret; long long len = 0, kbytes = 0, written = 0; if (!PyArg_ParseTuple(args, "iLii", &fd, &len, &seed, &verbose)) return NULL; srandom(seed); for (kbytes = 0; kbytes < len/1024; kbytes++) { unsigned i; for (i = 0; i < sizeof(buf)/sizeof(buf[0]); i++) buf[i] = random(); ret = write(fd, buf, sizeof(buf)); if (ret < 0) ret = 0; written += ret; if (ret < (int)sizeof(buf)) break; if (verbose && kbytes/1024 > 0 && !(kbytes%1024)) fprintf(stderr, "Random: %lld Mbytes\r", kbytes/1024); } // handle non-multiples of 1024 if (len % 1024) { unsigned i; for (i = 0; i < sizeof(buf)/sizeof(buf[0]); i++) buf[i] = random(); ret = write(fd, buf, len % 1024); if (ret < 0) ret = 0; written += ret; } if (kbytes/1024 > 0) fprintf(stderr, "Random: %lld Mbytes, done.\n", kbytes/1024); return Py_BuildValue("L", written); } static PyObject *random_sha(PyObject *self, PyObject *args) { static int seeded = 0; uint32_t shabuf[20/4]; int i; if (!seeded) { assert(sizeof(shabuf) == 20); srandom(time(NULL)); seeded = 1; } if (!PyArg_ParseTuple(args, "")) return NULL; memset(shabuf, 0, sizeof(shabuf)); for (i=0; i < 20/4; i++) shabuf[i] = random(); return Py_BuildValue("s#", shabuf, 20); } static int _open_noatime(const char *filename, int attrs) { int attrs_noatime, fd; attrs |= O_RDONLY; #ifdef O_NOFOLLOW attrs |= O_NOFOLLOW; #endif #ifdef O_LARGEFILE attrs |= O_LARGEFILE; #endif attrs_noatime = attrs; #ifdef O_NOATIME attrs_noatime |= O_NOATIME; #endif fd = open(filename, attrs_noatime); if (fd < 0 && errno == EPERM) { // older Linux kernels would return EPERM if you used O_NOATIME // and weren't the file's owner. This pointless restriction was // relaxed eventually, but we have to handle it anyway. 
// (VERY old kernels didn't recognized O_NOATIME, but they would // just harmlessly ignore it, so this branch won't trigger) fd = open(filename, attrs); } return fd; } static PyObject *open_noatime(PyObject *self, PyObject *args) { char *filename = NULL; int fd; if (!PyArg_ParseTuple(args, "s", &filename)) return NULL; fd = _open_noatime(filename, 0); if (fd < 0) return PyErr_SetFromErrnoWithFilename(PyExc_OSError, filename); return Py_BuildValue("i", fd); } static PyObject *fadvise_done(PyObject *self, PyObject *args) { int fd = -1; long long llofs, lllen = 0; if (!PyArg_ParseTuple(args, "iLL", &fd, &llofs, &lllen)) return NULL; off_t ofs, len; if (!INTEGRAL_ASSIGNMENT_FITS(&ofs, llofs)) return PyErr_Format(PyExc_OverflowError, "fadvise offset overflows off_t"); if (!INTEGRAL_ASSIGNMENT_FITS(&len, lllen)) return PyErr_Format(PyExc_OverflowError, "fadvise length overflows off_t"); #ifdef POSIX_FADV_DONTNEED posix_fadvise(fd, ofs, len, POSIX_FADV_DONTNEED); #endif return Py_BuildValue(""); } // Currently the Linux kernel and FUSE disagree over the type for // FS_IOC_GETFLAGS and FS_IOC_SETFLAGS. The kernel actually uses int, // but FUSE chose long (matching the declaration in linux/fs.h). So // if you use int, and then traverse a FUSE filesystem, you may // corrupt the stack. But if you use long, then you may get invalid // results on big-endian systems. // // For now, we just use long, and then disable Linux attrs entirely // (with a warning) in helpers.py on systems that are affected. #ifdef BUP_HAVE_FILE_ATTRS static PyObject *bup_get_linux_file_attr(PyObject *self, PyObject *args) { int rc; unsigned long attr; char *path; int fd; if (!PyArg_ParseTuple(args, "s", &path)) return NULL; fd = _open_noatime(path, O_NONBLOCK); if (fd == -1) return PyErr_SetFromErrnoWithFilename(PyExc_OSError, path); attr = 0; // Handle int/long mismatch (see above) rc = ioctl(fd, FS_IOC_GETFLAGS, &attr); if (rc == -1) { close(fd); return PyErr_SetFromErrnoWithFilename(PyExc_OSError, path); } close(fd); assert(attr <= UINT_MAX); // Kernel type is actually int return PyLong_FromUnsignedLong(attr); } #endif /* def BUP_HAVE_FILE_ATTRS */ #ifdef BUP_HAVE_FILE_ATTRS static PyObject *bup_set_linux_file_attr(PyObject *self, PyObject *args) { int rc; unsigned long orig_attr; unsigned int attr; char *path; PyObject *py_attr; int fd; if (!PyArg_ParseTuple(args, "sO", &path, &py_attr)) return NULL; if (!bup_uint_from_py(&attr, py_attr, "attr")) return NULL; fd = open(path, O_RDONLY | O_NONBLOCK | O_LARGEFILE | O_NOFOLLOW); if (fd == -1) return PyErr_SetFromErrnoWithFilename(PyExc_OSError, path); // Restrict attr to modifiable flags acdeijstuADST -- see // chattr(1) and the e2fsprogs source. Letter to flag mapping is // in pf.c flags_array[]. attr &= FS_APPEND_FL | FS_COMPR_FL | FS_NODUMP_FL | FS_EXTENT_FL | FS_IMMUTABLE_FL | FS_JOURNAL_DATA_FL | FS_SECRM_FL | FS_NOTAIL_FL | FS_UNRM_FL | FS_NOATIME_FL | FS_DIRSYNC_FL | FS_SYNC_FL | FS_TOPDIR_FL | FS_NOCOW_FL; // The extents flag can't be removed, so don't (see chattr(1) and chattr.c). 
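    // In other words: read the file's current flags, carry FS_EXTENT_FL
    // over from them, then apply the masked set with FS_IOC_SETFLAGS.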
orig_attr = 0; // Handle int/long mismatch (see above) rc = ioctl(fd, FS_IOC_GETFLAGS, &orig_attr); if (rc == -1) { close(fd); return PyErr_SetFromErrnoWithFilename(PyExc_OSError, path); } assert(orig_attr <= UINT_MAX); // Kernel type is actually int attr |= ((unsigned int) orig_attr) & FS_EXTENT_FL; rc = ioctl(fd, FS_IOC_SETFLAGS, &attr); if (rc == -1) { close(fd); return PyErr_SetFromErrnoWithFilename(PyExc_OSError, path); } close(fd); return Py_BuildValue("O", Py_None); } #endif /* def BUP_HAVE_FILE_ATTRS */ #ifndef HAVE_UTIMENSAT #ifndef HAVE_UTIMES #error "cannot find utimensat or utimes()" #endif #ifndef HAVE_LUTIMES #error "cannot find utimensat or lutimes()" #endif #endif #define ASSIGN_PYLONG_TO_INTEGRAL(dest, pylong, overflow) \ ({ \ int result = 0; \ *(overflow) = 0; \ const long long lltmp = PyLong_AsLongLong(pylong); \ if (lltmp == -1 && PyErr_Occurred()) \ { \ if (PyErr_ExceptionMatches(PyExc_OverflowError)) \ { \ const unsigned long long ulltmp = PyLong_AsUnsignedLongLong(pylong); \ if (ulltmp == (unsigned long long) -1 && PyErr_Occurred()) \ { \ if (PyErr_ExceptionMatches(PyExc_OverflowError)) \ { \ PyErr_Clear(); \ *(overflow) = 1; \ } \ } \ if (INTEGRAL_ASSIGNMENT_FITS((dest), ulltmp)) \ result = 1; \ else \ *(overflow) = 1; \ } \ } \ else \ { \ if (INTEGRAL_ASSIGNMENT_FITS((dest), lltmp)) \ result = 1; \ else \ *(overflow) = 1; \ } \ result; \ }) #ifdef HAVE_UTIMENSAT static PyObject *bup_utimensat(PyObject *self, PyObject *args) { int rc; int fd, flag; char *path; PyObject *access_py, *modification_py; struct timespec ts[2]; if (!PyArg_ParseTuple(args, "is((Ol)(Ol))i", &fd, &path, &access_py, &(ts[0].tv_nsec), &modification_py, &(ts[1].tv_nsec), &flag)) return NULL; int overflow; if (!ASSIGN_PYLONG_TO_INTEGRAL(&(ts[0].tv_sec), access_py, &overflow)) { if (overflow) PyErr_SetString(PyExc_ValueError, "unable to convert access time seconds for utimensat"); return NULL; } if (!ASSIGN_PYLONG_TO_INTEGRAL(&(ts[1].tv_sec), modification_py, &overflow)) { if (overflow) PyErr_SetString(PyExc_ValueError, "unable to convert modification time seconds for utimensat"); return NULL; } rc = utimensat(fd, path, ts, flag); if (rc != 0) return PyErr_SetFromErrnoWithFilename(PyExc_OSError, path); return Py_BuildValue("O", Py_None); } #endif /* def HAVE_UTIMENSAT */ #if defined(HAVE_UTIMES) || defined(HAVE_LUTIMES) static int bup_parse_xutimes_args(char **path, struct timeval tv[2], PyObject *args) { PyObject *access_py, *modification_py; long long access_us, modification_us; // POSIX guarantees tv_usec is signed. 
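    // The expected argument shape from the Python side, per the format
    // string below:
    //   (path, ((atime_sec, atime_usec), (mtime_sec, mtime_usec)))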
if (!PyArg_ParseTuple(args, "s((OL)(OL))", path, &access_py, &access_us, &modification_py, &modification_us)) return 0; int overflow; if (!ASSIGN_PYLONG_TO_INTEGRAL(&(tv[0].tv_sec), access_py, &overflow)) { if (overflow) PyErr_SetString(PyExc_ValueError, "unable to convert access time seconds to timeval"); return 0; } if (!INTEGRAL_ASSIGNMENT_FITS(&(tv[0].tv_usec), access_us)) { PyErr_SetString(PyExc_ValueError, "unable to convert access time nanoseconds to timeval"); return 0; } if (!ASSIGN_PYLONG_TO_INTEGRAL(&(tv[1].tv_sec), modification_py, &overflow)) { if (overflow) PyErr_SetString(PyExc_ValueError, "unable to convert modification time seconds to timeval"); return 0; } if (!INTEGRAL_ASSIGNMENT_FITS(&(tv[1].tv_usec), modification_us)) { PyErr_SetString(PyExc_ValueError, "unable to convert modification time nanoseconds to timeval"); return 0; } return 1; } #endif /* defined(HAVE_UTIMES) || defined(HAVE_LUTIMES) */ #ifdef HAVE_UTIMES static PyObject *bup_utimes(PyObject *self, PyObject *args) { char *path; struct timeval tv[2]; if (!bup_parse_xutimes_args(&path, tv, args)) return NULL; int rc = utimes(path, tv); if (rc != 0) return PyErr_SetFromErrnoWithFilename(PyExc_OSError, path); return Py_BuildValue("O", Py_None); } #endif /* def HAVE_UTIMES */ #ifdef HAVE_LUTIMES static PyObject *bup_lutimes(PyObject *self, PyObject *args) { char *path; struct timeval tv[2]; if (!bup_parse_xutimes_args(&path, tv, args)) return NULL; int rc = lutimes(path, tv); if (rc != 0) return PyErr_SetFromErrnoWithFilename(PyExc_OSError, path); return Py_BuildValue("O", Py_None); } #endif /* def HAVE_LUTIMES */ #ifdef HAVE_STAT_ST_ATIM # define BUP_STAT_ATIME_NS(st) (st)->st_atim.tv_nsec # define BUP_STAT_MTIME_NS(st) (st)->st_mtim.tv_nsec # define BUP_STAT_CTIME_NS(st) (st)->st_ctim.tv_nsec #elif defined HAVE_STAT_ST_ATIMENSEC # define BUP_STAT_ATIME_NS(st) (st)->st_atimespec.tv_nsec # define BUP_STAT_MTIME_NS(st) (st)->st_mtimespec.tv_nsec # define BUP_STAT_CTIME_NS(st) (st)->st_ctimespec.tv_nsec #else # define BUP_STAT_ATIME_NS(st) 0 # define BUP_STAT_MTIME_NS(st) 0 # define BUP_STAT_CTIME_NS(st) 0 #endif #pragma clang diagnostic push #pragma clang diagnostic ignored "-Wtautological-compare" // For INTEGER_TO_PY(). static PyObject *stat_struct_to_py(const struct stat *st, const char *filename, int fd) { // We can check the known (via POSIX) signed and unsigned types at // compile time, but not (easily) the unspecified types, so handle // those via INTEGER_TO_PY(). Assumes ns values will fit in a // long. 
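    //
    // The resulting tuple layout, matching the format string below:
    //   (mode, ino, dev, nlink, uid, gid, rdev, size,
    //    (atime, atime_ns), (mtime, mtime_ns), (ctime, ctime_ns))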
return Py_BuildValue("OKOOOOOL(Ol)(Ol)(Ol)", INTEGER_TO_PY(st->st_mode), (unsigned PY_LONG_LONG) st->st_ino, INTEGER_TO_PY(st->st_dev), INTEGER_TO_PY(st->st_nlink), INTEGER_TO_PY(st->st_uid), INTEGER_TO_PY(st->st_gid), INTEGER_TO_PY(st->st_rdev), (PY_LONG_LONG) st->st_size, INTEGER_TO_PY(st->st_atime), (long) BUP_STAT_ATIME_NS(st), INTEGER_TO_PY(st->st_mtime), (long) BUP_STAT_MTIME_NS(st), INTEGER_TO_PY(st->st_ctime), (long) BUP_STAT_CTIME_NS(st)); } #pragma clang diagnostic pop // ignored "-Wtautological-compare" static PyObject *bup_stat(PyObject *self, PyObject *args) { int rc; char *filename; if (!PyArg_ParseTuple(args, "s", &filename)) return NULL; struct stat st; rc = stat(filename, &st); if (rc != 0) return PyErr_SetFromErrnoWithFilename(PyExc_OSError, filename); return stat_struct_to_py(&st, filename, 0); } static PyObject *bup_lstat(PyObject *self, PyObject *args) { int rc; char *filename; if (!PyArg_ParseTuple(args, "s", &filename)) return NULL; struct stat st; rc = lstat(filename, &st); if (rc != 0) return PyErr_SetFromErrnoWithFilename(PyExc_OSError, filename); return stat_struct_to_py(&st, filename, 0); } static PyObject *bup_fstat(PyObject *self, PyObject *args) { int rc, fd; if (!PyArg_ParseTuple(args, "i", &fd)) return NULL; struct stat st; rc = fstat(fd, &st); if (rc != 0) return PyErr_SetFromErrno(PyExc_OSError); return stat_struct_to_py(&st, NULL, fd); } #ifdef HAVE_TM_TM_GMTOFF static PyObject *bup_localtime(PyObject *self, PyObject *args) { long long lltime; time_t ttime; if (!PyArg_ParseTuple(args, "L", &lltime)) return NULL; if (!INTEGRAL_ASSIGNMENT_FITS(&ttime, lltime)) return PyErr_Format(PyExc_OverflowError, "time value too large"); struct tm tm; tzset(); if(localtime_r(&ttime, &tm) == NULL) return PyErr_SetFromErrno(PyExc_OSError); // Match the Python struct_time values. 
return Py_BuildValue("[i,i,i,i,i,i,i,i,i,i,s]", 1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday, tm.tm_hour, tm.tm_min, tm.tm_sec, tm.tm_wday, tm.tm_yday + 1, tm.tm_isdst, tm.tm_gmtoff, tm.tm_zone); } #endif /* def HAVE_TM_TM_GMTOFF */ #ifdef BUP_MINCORE_BUF_TYPE static PyObject *bup_mincore(PyObject *self, PyObject *args) { const char *src; Py_ssize_t src_ssize; Py_buffer dest; PyObject *py_src_n, *py_src_off, *py_dest_off; if (!PyArg_ParseTuple(args, "s#OOw*O", &src, &src_ssize, &py_src_n, &py_src_off, &dest, &py_dest_off)) return NULL; unsigned long long src_size, src_n, src_off, dest_size, dest_off; if (!(bup_ullong_from_py(&src_n, py_src_n, "src_n") && bup_ullong_from_py(&src_off, py_src_off, "src_off") && bup_ullong_from_py(&dest_off, py_dest_off, "dest_off"))) return NULL; if (!INTEGRAL_ASSIGNMENT_FITS(&src_size, src_ssize)) return PyErr_Format(PyExc_OverflowError, "invalid src size"); unsigned long long src_region_end; if (!uadd(&src_region_end, src_off, src_n)) return PyErr_Format(PyExc_OverflowError, "(src_off + src_n) too large"); if (src_region_end > src_size) return PyErr_Format(PyExc_OverflowError, "region runs off end of src"); if (!INTEGRAL_ASSIGNMENT_FITS(&dest_size, dest.len)) return PyErr_Format(PyExc_OverflowError, "invalid dest size"); if (dest_off > dest_size) return PyErr_Format(PyExc_OverflowError, "region runs off end of dest"); size_t length; if (!INTEGRAL_ASSIGNMENT_FITS(&length, src_n)) return PyErr_Format(PyExc_OverflowError, "src_n overflows size_t"); int rc = mincore((void *)(src + src_off), src_n, (BUP_MINCORE_BUF_TYPE *) (dest.buf + dest_off)); if (rc != 0) return PyErr_SetFromErrno(PyExc_OSError); return Py_BuildValue("O", Py_None); } #endif /* def BUP_MINCORE_BUF_TYPE */ static PyMethodDef helper_methods[] = { { "write_sparsely", bup_write_sparsely, METH_VARARGS, "Write buf excepting zeros at the end. Return trailing zero count." }, { "selftest", selftest, METH_VARARGS, "Check that the rolling checksum rolls correctly (for unit tests)." }, { "blobbits", blobbits, METH_VARARGS, "Return the number of bits in the rolling checksum." }, { "splitbuf", splitbuf, METH_VARARGS, "Split a list of strings based on a rolling checksum." }, { "bitmatch", bitmatch, METH_VARARGS, "Count the number of matching prefix bits between two strings." }, { "firstword", firstword, METH_VARARGS, "Return an int corresponding to the first 32 bits of buf." }, { "bloom_contains", bloom_contains, METH_VARARGS, "Check if a bloom filter of 2^nbits bytes contains an object" }, { "bloom_add", bloom_add, METH_VARARGS, "Add an object to a bloom filter of 2^nbits bytes" }, { "extract_bits", extract_bits, METH_VARARGS, "Take the first 'nbits' bits from 'buf' and return them as an int." }, { "merge_into", merge_into, METH_VARARGS, "Merges a bunch of idx and midx files into a single midx." }, { "write_idx", write_idx, METH_VARARGS, "Write a PackIdxV2 file from an idx list of lists of tuples" }, { "write_random", write_random, METH_VARARGS, "Write random bytes to the given file descriptor" }, { "random_sha", random_sha, METH_VARARGS, "Return a random 20-byte string" }, { "open_noatime", open_noatime, METH_VARARGS, "open() the given filename for read with O_NOATIME if possible" }, { "fadvise_done", fadvise_done, METH_VARARGS, "Inform the kernel that we're finished with earlier parts of a file" }, #ifdef BUP_HAVE_FILE_ATTRS { "get_linux_file_attr", bup_get_linux_file_attr, METH_VARARGS, "Return the Linux attributes for the given file." 
}, #endif #ifdef BUP_HAVE_FILE_ATTRS { "set_linux_file_attr", bup_set_linux_file_attr, METH_VARARGS, "Set the Linux attributes for the given file." }, #endif #ifdef HAVE_UTIMENSAT { "bup_utimensat", bup_utimensat, METH_VARARGS, "Change path timestamps with nanosecond precision (POSIX)." }, #endif #ifdef HAVE_UTIMES { "bup_utimes", bup_utimes, METH_VARARGS, "Change path timestamps with microsecond precision." }, #endif #ifdef HAVE_LUTIMES { "bup_lutimes", bup_lutimes, METH_VARARGS, "Change path timestamps with microsecond precision;" " don't follow symlinks." }, #endif { "stat", bup_stat, METH_VARARGS, "Extended version of stat." }, { "lstat", bup_lstat, METH_VARARGS, "Extended version of lstat." }, { "fstat", bup_fstat, METH_VARARGS, "Extended version of fstat." }, #ifdef HAVE_TM_TM_GMTOFF { "localtime", bup_localtime, METH_VARARGS, "Return struct_time elements plus the timezone offset and name." }, #endif #ifdef BUP_MINCORE_BUF_TYPE { "mincore", bup_mincore, METH_VARARGS, "For mincore(src, src_n, src_off, dest, dest_off)" " call the system mincore(src + src_off, src_n, &dest[dest_off])." }, #endif { NULL, NULL, 0, NULL }, // sentinel }; PyMODINIT_FUNC init_helpers(void) { // FIXME: migrate these tests to configure. Check against the // type we're going to use when passing to python. Other stat // types are tested at runtime. assert(sizeof(ino_t) <= sizeof(unsigned PY_LONG_LONG)); assert(sizeof(off_t) <= sizeof(PY_LONG_LONG)); assert(sizeof(blksize_t) <= sizeof(PY_LONG_LONG)); assert(sizeof(blkcnt_t) <= sizeof(PY_LONG_LONG)); // Just be sure (relevant when passing timestamps back to Python above). assert(sizeof(PY_LONG_LONG) <= sizeof(long long)); assert(sizeof(unsigned PY_LONG_LONG) <= sizeof(unsigned long long)); // Originally required by append_sparse_region() { off_t probe; if (!INTEGRAL_ASSIGNMENT_FITS(&probe, INT_MAX)) { fprintf(stderr, "off_t can't hold INT_MAX; please report.\n"); exit(1); } } char *e; PyObject *m = Py_InitModule("_helpers", helper_methods); if (m == NULL) return; #pragma clang diagnostic push #pragma clang diagnostic ignored "-Wtautological-compare" // For INTEGER_TO_PY(). { PyObject *value; value = INTEGER_TO_PY(INT_MAX); PyObject_SetAttrString(m, "INT_MAX", value); Py_DECREF(value); value = INTEGER_TO_PY(UINT_MAX); PyObject_SetAttrString(m, "UINT_MAX", value); Py_DECREF(value); } #ifdef HAVE_UTIMENSAT { PyObject *value; value = INTEGER_TO_PY(AT_FDCWD); PyObject_SetAttrString(m, "AT_FDCWD", value); Py_DECREF(value); value = INTEGER_TO_PY(AT_SYMLINK_NOFOLLOW); PyObject_SetAttrString(m, "AT_SYMLINK_NOFOLLOW", value); Py_DECREF(value); value = INTEGER_TO_PY(UTIME_NOW); PyObject_SetAttrString(m, "UTIME_NOW", value); Py_DECREF(value); } #endif #ifdef BUP_HAVE_MINCORE_INCORE { PyObject *value; value = INTEGER_TO_PY(MINCORE_INCORE); PyObject_SetAttrString(m, "MINCORE_INCORE", value); Py_DECREF(value); } #endif #pragma clang diagnostic pop // ignored "-Wtautological-compare" e = getenv("BUP_FORCE_TTY"); istty2 = isatty(2) || (atoi(e ? e : "0") & 2); unpythonize_argv(); } bup-0.29/lib/bup/_release.py000066400000000000000000000002261303127641400157200ustar00rootroot00000000000000 # This will be automatically populated by git via export-subst. # cf. 
./.gitattributes COMMIT='$Format:%H$' NAMES='$Format:%d$' DATE='$Format:%ci$' bup-0.29/lib/bup/bloom.py000066400000000000000000000253211303127641400152540ustar00rootroot00000000000000"""Discussion of bloom constants for bup: There are four basic things to consider when building a bloom filter: The size, in bits, of the filter The capacity, in entries, of the filter The probability of a false positive that is tolerable The number of bits readily available to use for addressing filter bits There is one major tunable that is not directly related to the above: k: the number of bits set in the filter per entry Here's a wall of numbers showing the relationship between k; the ratio between the filter size in bits and the entries in the filter; and pfalse_positive: mn|k=3 |k=4 |k=5 |k=6 |k=7 |k=8 |k=9 |k=10 |k=11 8|3.05794|2.39687|2.16792|2.15771|2.29297|2.54917|2.92244|3.41909|4.05091 9|2.27780|1.65770|1.40703|1.32721|1.34892|1.44631|1.61138|1.84491|2.15259 10|1.74106|1.18133|0.94309|0.84362|0.81937|0.84555|0.91270|1.01859|1.16495 11|1.36005|0.86373|0.65018|0.55222|0.51259|0.50864|0.53098|0.57616|0.64387 12|1.08231|0.64568|0.45945|0.37108|0.32939|0.31424|0.31695|0.33387|0.36380 13|0.87517|0.49210|0.33183|0.25527|0.21689|0.19897|0.19384|0.19804|0.21013 14|0.71759|0.38147|0.24433|0.17934|0.14601|0.12887|0.12127|0.12012|0.12399 15|0.59562|0.30019|0.18303|0.12840|0.10028|0.08523|0.07749|0.07440|0.07468 16|0.49977|0.23941|0.13925|0.09351|0.07015|0.05745|0.05049|0.04700|0.04587 17|0.42340|0.19323|0.10742|0.06916|0.04990|0.03941|0.03350|0.03024|0.02870 18|0.36181|0.15765|0.08392|0.05188|0.03604|0.02748|0.02260|0.01980|0.01827 19|0.31160|0.12989|0.06632|0.03942|0.02640|0.01945|0.01549|0.01317|0.01182 20|0.27026|0.10797|0.05296|0.03031|0.01959|0.01396|0.01077|0.00889|0.00777 21|0.23591|0.09048|0.04269|0.02356|0.01471|0.01014|0.00759|0.00609|0.00518 22|0.20714|0.07639|0.03473|0.01850|0.01117|0.00746|0.00542|0.00423|0.00350 23|0.18287|0.06493|0.02847|0.01466|0.00856|0.00555|0.00392|0.00297|0.00240 24|0.16224|0.05554|0.02352|0.01171|0.00663|0.00417|0.00286|0.00211|0.00166 25|0.14459|0.04779|0.01957|0.00944|0.00518|0.00316|0.00211|0.00152|0.00116 26|0.12942|0.04135|0.01639|0.00766|0.00408|0.00242|0.00157|0.00110|0.00082 27|0.11629|0.03595|0.01381|0.00626|0.00324|0.00187|0.00118|0.00081|0.00059 28|0.10489|0.03141|0.01170|0.00515|0.00259|0.00146|0.00090|0.00060|0.00043 29|0.09492|0.02756|0.00996|0.00426|0.00209|0.00114|0.00069|0.00045|0.00031 30|0.08618|0.02428|0.00853|0.00355|0.00169|0.00090|0.00053|0.00034|0.00023 31|0.07848|0.02147|0.00733|0.00297|0.00138|0.00072|0.00041|0.00025|0.00017 32|0.07167|0.01906|0.00633|0.00250|0.00113|0.00057|0.00032|0.00019|0.00013 Here's a table showing available repository size for a given pfalse_positive and three values of k (assuming we only use the 160 bit SHA1 for addressing the filter and 8192bytes per object): pfalse|obj k=4 |cap k=4 |obj k=5 |cap k=5 |obj k=6 |cap k=6 2.500%|139333497228|1038.11 TiB|558711157|4262.63 GiB|13815755|105.41 GiB 1.000%|104489450934| 778.50 TiB|436090254|3327.10 GiB|11077519| 84.51 GiB 0.125%| 57254889824| 426.58 TiB|261732190|1996.86 GiB| 7063017| 55.89 GiB This eliminates pretty neatly any k>6 as long as we use the raw SHA for addressing. filter size scales linearly with repository size for a given k and pfalse. 
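As a quick check on the table above: the entries are the standard estimate
pfalse = (1 - e^(-k*n/m))^k expressed as a percentage, where m is the filter
size in bits and n the number of entries.  For m/n = 16 and k = 5,
(1 - e^(-5/16))^5 ~= 0.2684^5 ~= 0.00139, i.e. the 0.13925% in the m/n = 16
row.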
Here's a table of filter sizes for a 1 TiB repository: pfalse| k=3 | k=4 | k=5 | k=6 2.500%| 138.78 MiB | 126.26 MiB | 123.00 MiB | 123.37 MiB 1.000%| 197.83 MiB | 168.36 MiB | 157.58 MiB | 153.87 MiB 0.125%| 421.14 MiB | 307.26 MiB | 262.56 MiB | 241.32 MiB For bup: * We want the bloom filter to fit in memory; if it doesn't, the k pagefaults per lookup will be worse than the two required for midx. * We want the pfalse_positive to be low enough that the cost of sometimes faulting on the midx doesn't overcome the benefit of the bloom filter. * We have readily available 160 bits for addressing the filter. * We want to be able to have a single bloom address entire repositories of reasonable size. Based on these parameters, a combination of k=4 and k=5 provides the behavior that bup needs. As such, I've implemented bloom addressing, adding and checking functions in C for these two values. Because k=5 requires less space and gives better overall pfalse_positive performance, it is preferred if a table with k=5 can represent the repository. None of this tells us what max_pfalse_positive to choose. Brandon Low 2011-02-04 """ import sys, os, math, mmap, struct from bup import _helpers from bup.helpers import (debug1, debug2, log, mmap_read, mmap_readwrite, mmap_readwrite_private, unlink) BLOOM_VERSION = 2 MAX_BITS_EACH = 32 # Kinda arbitrary, but 4 bytes per entry is pretty big MAX_BLOOM_BITS = {4: 37, 5: 29} # 160/k-log2(8) MAX_PFALSE_POSITIVE = 1. # Totally arbitrary, needs benchmarking _total_searches = 0 _total_steps = 0 bloom_contains = _helpers.bloom_contains bloom_add = _helpers.bloom_add # FIXME: check bloom create() and ShaBloom handling/ownership of "f". # The ownership semantics should be clarified since the caller needs # to know who is responsible for closing it. class ShaBloom: """Wrapper which contains data from multiple index files. """ def __init__(self, filename, f=None, readwrite=False, expected=-1): self.name = filename self.rwfile = None self.map = None assert(filename.endswith('.bloom')) if readwrite: assert(expected > 0) self.rwfile = f = f or open(filename, 'r+b') f.seek(0) # Decide if we want to mmap() the pages as writable ('immediate' # write) or else map them privately for later writing back to # the file ('delayed' write). A bloom table's write access # pattern is such that we dirty almost all the pages after adding # very few entries. But the table is so big that dirtying # *all* the pages often exceeds Linux's default # /proc/sys/vm/dirty_ratio or /proc/sys/vm/dirty_background_ratio, # thus causing it to start flushing the table before we're # finished... even though there's more than enough space to # store the bloom table in RAM. # # To work around that behaviour, if we calculate that we'll # probably end up touching the whole table anyway (at least # one bit flipped per memory page), let's use a "private" mmap, # which defeats Linux's ability to flush it to disk. Then we'll # flush it as one big lump during close(). 
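            #
            # Concretely, the heuristic below compares the expected number of
            # new entries against a rough estimate derived from the table's
            # size in pages (assuming k=5 bits set per entry); when the add
            # load is big enough to dirty nearly every page anyway, the
            # private mapping is used.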
pages = os.fstat(f.fileno()).st_size / 4096 * 5 # assume k=5 self.delaywrite = expected > pages debug1('bloom: delaywrite=%r\n' % self.delaywrite) if self.delaywrite: self.map = mmap_readwrite_private(self.rwfile, close=False) else: self.map = mmap_readwrite(self.rwfile, close=False) else: self.rwfile = None f = f or open(filename, 'rb') self.map = mmap_read(f) got = str(self.map[0:4]) if got != 'BLOM': log('Warning: invalid BLOM header (%r) in %r\n' % (got, filename)) return self._init_failed() ver = struct.unpack('!I', self.map[4:8])[0] if ver < BLOOM_VERSION: log('Warning: ignoring old-style (v%d) bloom %r\n' % (ver, filename)) return self._init_failed() if ver > BLOOM_VERSION: log('Warning: ignoring too-new (v%d) bloom %r\n' % (ver, filename)) return self._init_failed() self.bits, self.k, self.entries = struct.unpack('!HHI', self.map[8:16]) idxnamestr = str(self.map[16 + 2**self.bits:]) if idxnamestr: self.idxnames = idxnamestr.split('\0') else: self.idxnames = [] def _init_failed(self): if self.map: self.map = None if self.rwfile: self.rwfile.close() self.rwfile = None self.idxnames = [] self.bits = self.entries = 0 def valid(self): return self.map and self.bits def __del__(self): self.close() def close(self): if self.map and self.rwfile: debug2("bloom: closing with %d entries\n" % self.entries) self.map[12:16] = struct.pack('!I', self.entries) if self.delaywrite: self.rwfile.seek(0) self.rwfile.write(self.map) else: self.map.flush() self.rwfile.seek(16 + 2**self.bits) if self.idxnames: self.rwfile.write('\0'.join(self.idxnames)) self._init_failed() def pfalse_positive(self, additional=0): n = self.entries + additional m = 8*2**self.bits k = self.k return 100*(1-math.exp(-k*float(n)/m))**k def add(self, ids): """Add the hashes in ids (packed binary 20-bytes) to the filter.""" if not self.map: raise Exception("Cannot add to closed bloom") self.entries += bloom_add(self.map, ids, self.bits, self.k) def add_idx(self, ix): """Add the object to the filter.""" self.add(ix.shatable) self.idxnames.append(os.path.basename(ix.name)) def exists(self, sha): """Return nonempty if the object probably exists in the bloom filter. If this function returns false, the object definitely does not exist. If it returns true, there is a small probability that it exists anyway, so you'll have to check it some other way. """ global _total_searches, _total_steps _total_searches += 1 if not self.map: return None found, steps = bloom_contains(self.map, str(sha), self.bits, self.k) _total_steps += steps return found def __len__(self): return int(self.entries) def create(name, expected, delaywrite=None, f=None, k=None): """Create and return a bloom filter for `expected` entries.""" bits = int(math.floor(math.log(expected*MAX_BITS_EACH/8,2))) k = k or ((bits <= MAX_BLOOM_BITS[5]) and 5 or 4) if bits > MAX_BLOOM_BITS[k]: log('bloom: warning, max bits exceeded, non-optimal\n') bits = MAX_BLOOM_BITS[k] debug1('bloom: using 2^%d bytes and %d hash functions\n' % (bits, k)) f = f or open(name, 'w+b') f.write('BLOM') f.write(struct.pack('!IHHI', BLOOM_VERSION, bits, k, 0)) assert(f.tell() == 16) # NOTE: On some systems this will not extend+zerofill, but it does on # darwin, linux, bsd and solaris. 
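    # The truncate() below grows the file to its full 16 + 2**bits size; on
    # the platforms listed above the new bytes read back as zeros, which is
    # exactly the empty-filter state we want.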
f.truncate(16+2**bits) f.seek(0) if delaywrite != None and not delaywrite: # tell it to expect very few objects, forcing a direct mmap expected = 1 return ShaBloom(name, f=f, readwrite=True, expected=expected) def clear_bloom(dir): unlink(os.path.join(dir, 'bup.bloom')) bup-0.29/lib/bup/bupsplit.c000066400000000000000000000111461303127641400156000ustar00rootroot00000000000000/* * Copyright 2011 Avery Pennarun. All rights reserved. * * (This license applies to bupsplit.c and bupsplit.h only.) * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * THIS SOFTWARE IS PROVIDED BY AVERY PENNARUN ``AS IS'' AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL OR * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #include "bupsplit.h" #include #include #include #include // According to librsync/rollsum.h: // "We should make this something other than zero to improve the // checksum algorithm: tridge suggests a prime number." // apenwarr: I unscientifically tried 0 and 7919, and they both ended up // slightly worse than the librsync value of 31 for my arbitrary test data. #define ROLLSUM_CHAR_OFFSET 31 typedef struct { unsigned s1, s2; uint8_t window[BUP_WINDOWSIZE]; int wofs; } Rollsum; // These formulas are based on rollsum.h in the librsync project. static void rollsum_add(Rollsum *r, uint8_t drop, uint8_t add) { r->s1 += add - drop; r->s2 += r->s1 - (BUP_WINDOWSIZE * (drop + ROLLSUM_CHAR_OFFSET)); } static void rollsum_init(Rollsum *r) { r->s1 = BUP_WINDOWSIZE * ROLLSUM_CHAR_OFFSET; r->s2 = BUP_WINDOWSIZE * (BUP_WINDOWSIZE-1) * ROLLSUM_CHAR_OFFSET; r->wofs = 0; memset(r->window, 0, BUP_WINDOWSIZE); } // For some reason, gcc 4.3 (at least) optimizes badly if find_ofs() // is static and rollsum_roll is an inline function. Let's use a macro // here instead to help out the optimizer. 
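//
// Each rollsum_roll() drops the byte leaving the window and adds the
// incoming one, so s1 stays equal to the sum of the bytes currently in the
// window (each offset by ROLLSUM_CHAR_OFFSET) and s2 to the corresponding
// position-weighted sum; rollsum_digest() then packs the two into a single
// 32-bit value.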
#define rollsum_roll(r, ch) do { \ rollsum_add((r), (r)->window[(r)->wofs], (ch)); \ (r)->window[(r)->wofs] = (ch); \ (r)->wofs = ((r)->wofs + 1) % BUP_WINDOWSIZE; \ } while (0) static uint32_t rollsum_digest(Rollsum *r) { return (r->s1 << 16) | (r->s2 & 0xffff); } static uint32_t rollsum_sum(uint8_t *buf, size_t ofs, size_t len) { size_t count; Rollsum r; rollsum_init(&r); for (count = ofs; count < len; count++) rollsum_roll(&r, buf[count]); return rollsum_digest(&r); } int bupsplit_find_ofs(const unsigned char *buf, int len, int *bits) { Rollsum r; int count; rollsum_init(&r); for (count = 0; count < len; count++) { rollsum_roll(&r, buf[count]); if ((r.s2 & (BUP_BLOBSIZE-1)) == ((~0) & (BUP_BLOBSIZE-1))) { if (bits) { unsigned rsum = rollsum_digest(&r); rsum >>= BUP_BLOBBITS; for (*bits = BUP_BLOBBITS; (rsum >>= 1) & 1; (*bits)++) ; } return count+1; } } return 0; } #ifndef BUP_NO_SELFTEST #define BUP_SELFTEST_SIZE 100000 int bupsplit_selftest() { uint8_t *buf = malloc(BUP_SELFTEST_SIZE); uint32_t sum1a, sum1b, sum2a, sum2b, sum3a, sum3b; unsigned count; srandom(1); for (count = 0; count < BUP_SELFTEST_SIZE; count++) buf[count] = random(); sum1a = rollsum_sum(buf, 0, BUP_SELFTEST_SIZE); sum1b = rollsum_sum(buf, 1, BUP_SELFTEST_SIZE); sum2a = rollsum_sum(buf, BUP_SELFTEST_SIZE - BUP_WINDOWSIZE*5/2, BUP_SELFTEST_SIZE - BUP_WINDOWSIZE); sum2b = rollsum_sum(buf, 0, BUP_SELFTEST_SIZE - BUP_WINDOWSIZE); sum3a = rollsum_sum(buf, 0, BUP_WINDOWSIZE+3); sum3b = rollsum_sum(buf, 3, BUP_WINDOWSIZE+3); fprintf(stderr, "sum1a = 0x%08x\n", sum1a); fprintf(stderr, "sum1b = 0x%08x\n", sum1b); fprintf(stderr, "sum2a = 0x%08x\n", sum2a); fprintf(stderr, "sum2b = 0x%08x\n", sum2b); fprintf(stderr, "sum3a = 0x%08x\n", sum3a); fprintf(stderr, "sum3b = 0x%08x\n", sum3b); free(buf); return sum1a!=sum1b || sum2a!=sum2b || sum3a!=sum3b; } #endif // !BUP_NO_SELFTEST bup-0.29/lib/bup/bupsplit.h000066400000000000000000000034371303127641400156110ustar00rootroot00000000000000/* * Copyright 2011 Avery Pennarun. All rights reserved. * * (This license applies to bupsplit.c and bupsplit.h only.) * * Redistribution and use in source and binary forms, with or without * modification, are permitted provided that the following conditions are * met: * * 1. Redistributions of source code must retain the above copyright * notice, this list of conditions and the following disclaimer. * * 2. Redistributions in binary form must reproduce the above copyright * notice, this list of conditions and the following disclaimer in * the documentation and/or other materials provided with the * distribution. * * THIS SOFTWARE IS PROVIDED BY AVERY PENNARUN ``AS IS'' AND ANY * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL OR * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF * LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. */ #ifndef __BUPSPLIT_H #define __BUPSPLIT_H #define BUP_BLOBBITS (13) #define BUP_BLOBSIZE (1<\[)?((?(sb)[0-9a-f:]+|[^:/]+))(?(sb)\])' port = r'(?::(\d+))?' path = r'(/.*)?' 
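    # Illustrative results (read off the regexes above and the fallback
    # below, not captured from an actual run):
    #   'ssh://host:2222/repo'  -> ('ssh', 'host', '2222', '/repo')
    #   'host:/repo'            -> ('ssh', 'host', None, '/repo')
    #   '/some/local/repo'      -> ('file', None, None, '/some/local/repo')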
url_match = re.match( '%s(?:%s%s)?%s' % (protocol, host, port, path), remote, re.I) if url_match: if not url_match.group(1) in ('ssh', 'bup', 'file'): raise ClientError, 'unexpected protocol: %s' % url_match.group(1) return url_match.group(1,3,4,5) else: rs = remote.split(':', 1) if len(rs) == 1 or rs[0] in ('', '-'): return 'file', None, None, rs[-1] else: return 'ssh', rs[0], None, rs[1] class Client: def __init__(self, remote, create=False): self._busy = self.conn = None self.sock = self.p = self.pout = self.pin = None is_reverse = os.environ.get('BUP_SERVER_REVERSE') if is_reverse: assert(not remote) remote = '%s:' % is_reverse (self.protocol, self.host, self.port, self.dir) = parse_remote(remote) self.cachedir = git.repo('index-cache/%s' % re.sub(r'[^@\w]', '_', "%s:%s" % (self.host, self.dir))) if is_reverse: self.pout = os.fdopen(3, 'rb') self.pin = os.fdopen(4, 'wb') self.conn = Conn(self.pout, self.pin) else: if self.protocol in ('ssh', 'file'): try: # FIXME: ssh and file shouldn't use the same module self.p = ssh.connect(self.host, self.port, 'server') self.pout = self.p.stdout self.pin = self.p.stdin self.conn = Conn(self.pout, self.pin) except OSError as e: raise ClientError, 'connect: %s' % e, sys.exc_info()[2] elif self.protocol == 'bup': self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) self.sock.connect((self.host, atoi(self.port) or 1982)) self.sockw = self.sock.makefile('wb') self.conn = DemuxConn(self.sock.fileno(), self.sockw) if self.dir: self.dir = re.sub(r'[\r\n]', ' ', self.dir) if create: self.conn.write('init-dir %s\n' % self.dir) else: self.conn.write('set-dir %s\n' % self.dir) self.check_ok() self.sync_indexes() def __del__(self): try: self.close() except IOError as e: if e.errno == errno.EPIPE: pass else: raise def close(self): if self.conn and not self._busy: self.conn.write('quit\n') if self.pin: self.pin.close() if self.sock and self.sockw: self.sockw.close() self.sock.shutdown(socket.SHUT_WR) if self.conn: self.conn.close() if self.pout: self.pout.close() if self.sock: self.sock.close() if self.p: self.p.wait() rv = self.p.wait() if rv: raise ClientError('server tunnel returned exit code %d' % rv) self.conn = None self.sock = self.p = self.pin = self.pout = None def check_ok(self): if self.p: rv = self.p.poll() if rv != None: raise ClientError('server exited unexpectedly with code %r' % rv) try: return self.conn.check_ok() except Exception as e: raise ClientError, e, sys.exc_info()[2] def check_busy(self): if self._busy: raise ClientError('already busy with command %r' % self._busy) def ensure_busy(self): if not self._busy: raise ClientError('expected to be busy, but not busy?!') def _not_busy(self): self._busy = None def sync_indexes(self): self.check_busy() conn = self.conn mkdirp(self.cachedir) # All cached idxs are extra until proven otherwise extra = set() for f in os.listdir(self.cachedir): debug1('%s\n' % f) if f.endswith('.idx'): extra.add(f) needed = set() conn.write('list-indexes\n') for line in linereader(conn): if not line: break assert(line.find('/') < 0) parts = line.split(' ') idx = parts[0] if len(parts) == 2 and parts[1] == 'load' and idx not in extra: # If the server requests that we load an idx and we don't # already have a copy of it, it is needed needed.add(idx) # Any idx that the server has heard of is proven not extra extra.discard(idx) self.check_ok() debug1('client: removing extra indexes: %s\n' % extra) for idx in extra: os.unlink(os.path.join(self.cachedir, idx)) debug1('client: server requested load of: %s\n' % 
needed) for idx in needed: self.sync_index(idx) git.auto_midx(self.cachedir) def sync_index(self, name): #debug1('requesting %r\n' % name) self.check_busy() mkdirp(self.cachedir) fn = os.path.join(self.cachedir, name) if os.path.exists(fn): msg = "won't request existing .idx, try `bup bloom --check %s`" % fn raise ClientError(msg) self.conn.write('send-index %s\n' % name) n = struct.unpack('!I', self.conn.read(4))[0] assert(n) with atomically_replaced_file(fn, 'w') as f: count = 0 progress('Receiving index from server: %d/%d\r' % (count, n)) for b in chunkyreader(self.conn, n): f.write(b) count += len(b) qprogress('Receiving index from server: %d/%d\r' % (count, n)) progress('Receiving index from server: %d/%d, done.\n' % (count, n)) self.check_ok() def _make_objcache(self): return git.PackIdxList(self.cachedir) def _suggest_packs(self): ob = self._busy if ob: assert(ob == 'receive-objects-v2') self.conn.write('\xff\xff\xff\xff') # suspend receive-objects-v2 suggested = [] for line in linereader(self.conn): if not line: break debug2('%s\n' % line) if line.startswith('index '): idx = line[6:] debug1('client: received index suggestion: %s\n' % git.shorten_hash(idx)) suggested.append(idx) else: assert(line.endswith('.idx')) debug1('client: completed writing pack, idx: %s\n' % git.shorten_hash(line)) suggested.append(line) self.check_ok() if ob: self._busy = None idx = None for idx in suggested: self.sync_index(idx) git.auto_midx(self.cachedir) if ob: self._busy = ob self.conn.write('%s\n' % ob) return idx def new_packwriter(self, compression_level = 1): self.check_busy() def _set_busy(): self._busy = 'receive-objects-v2' self.conn.write('receive-objects-v2\n') return PackWriter_Remote(self.conn, objcache_maker = self._make_objcache, suggest_packs = self._suggest_packs, onopen = _set_busy, onclose = self._not_busy, ensure_busy = self.ensure_busy, compression_level = compression_level) def read_ref(self, refname): self.check_busy() self.conn.write('read-ref %s\n' % refname) r = self.conn.readline().strip() self.check_ok() if r: assert(len(r) == 40) # hexified sha return r.decode('hex') else: return None # nonexistent ref def update_ref(self, refname, newval, oldval): self.check_busy() self.conn.write('update-ref %s\n%s\n%s\n' % (refname, newval.encode('hex'), (oldval or '').encode('hex'))) self.check_ok() def cat(self, id): self.check_busy() self._busy = 'cat' self.conn.write('cat %s\n' % re.sub(r'[\n\r]', '_', id)) while 1: sz = struct.unpack('!I', self.conn.read(4))[0] if not sz: break yield self.conn.read(sz) e = self.check_ok() self._not_busy() if e: raise KeyError(str(e)) class PackWriter_Remote(git.PackWriter): def __init__(self, conn, objcache_maker, suggest_packs, onopen, onclose, ensure_busy, compression_level=1): git.PackWriter.__init__(self, objcache_maker) self.file = conn self.filename = 'remote socket' self.suggest_packs = suggest_packs self.onopen = onopen self.onclose = onclose self.ensure_busy = ensure_busy self._packopen = False self._bwcount = 0 self._bwtime = time.time() def _open(self): if not self._packopen: self.onopen() self._packopen = True def _end(self, run_midx=True): assert(run_midx) # We don't support this via remote yet if self._packopen and self.file: self.file.write('\0\0\0\0') self._packopen = False self.onclose() # Unbusy self.objcache = None return self.suggest_packs() # Returns last idx received def close(self): id = self._end() self.file = None return id def abort(self): raise ClientError("don't know how to abort remote pack writing") def _raw_write(self, 
datalist, sha): assert(self.file) if not self._packopen: self._open() self.ensure_busy() data = ''.join(datalist) assert(data) assert(sha) crc = zlib.crc32(data) & 0xffffffff outbuf = ''.join((struct.pack('!I', len(data) + 20 + 4), sha, struct.pack('!I', crc), data)) try: (self._bwcount, self._bwtime) = _raw_write_bwlimit( self.file, outbuf, self._bwcount, self._bwtime) except IOError as e: raise ClientError, e, sys.exc_info()[2] self.outbytes += len(data) self.count += 1 if self.file.has_input(): self.suggest_packs() self.objcache.refresh() return sha, crc bup-0.29/lib/bup/csetup.py000066400000000000000000000005151303127641400154450ustar00rootroot00000000000000from distutils.core import setup, Extension _helpers_mod = Extension('_helpers', sources=['_helpers.c', 'bupsplit.c'], depends=['../../config/config.h']) setup(name='_helpers', version='0.1', description='accelerator library for bup', ext_modules=[_helpers_mod]) bup-0.29/lib/bup/drecurse.py000066400000000000000000000105251303127641400157600ustar00rootroot00000000000000 import stat, os from bup.helpers import add_error, should_rx_exclude_path, debug1, resolve_parent import bup.xstat as xstat try: O_LARGEFILE = os.O_LARGEFILE except AttributeError: O_LARGEFILE = 0 try: O_NOFOLLOW = os.O_NOFOLLOW except AttributeError: O_NOFOLLOW = 0 # the use of fchdir() and lstat() is for two reasons: # - help out the kernel by not making it repeatedly look up the absolute path # - avoid race conditions caused by doing listdir() on a changing symlink class OsFile: def __init__(self, path): self.fd = None self.fd = os.open(path, os.O_RDONLY|O_LARGEFILE|O_NOFOLLOW|os.O_NDELAY) def __del__(self): if self.fd: fd = self.fd self.fd = None os.close(fd) def fchdir(self): os.fchdir(self.fd) def stat(self): return xstat.fstat(self.fd) _IFMT = stat.S_IFMT(0xffffffff) # avoid function call in inner loop def _dirlist(): l = [] for n in os.listdir('.'): try: st = xstat.lstat(n) except OSError as e: add_error(Exception('%s: %s' % (resolve_parent(n), str(e)))) continue if (st.st_mode & _IFMT) == stat.S_IFDIR: n += '/' l.append((n,st)) l.sort(reverse=True) return l def _recursive_dirlist(prepend, xdev, bup_dir=None, excluded_paths=None, exclude_rxs=None, xdev_exceptions=frozenset()): for (name,pst) in _dirlist(): path = prepend + name if excluded_paths: if os.path.normpath(path) in excluded_paths: debug1('Skipping %r: excluded.\n' % path) continue if exclude_rxs and should_rx_exclude_path(path, exclude_rxs): continue if name.endswith('/'): if bup_dir != None: if os.path.normpath(path) == bup_dir: debug1('Skipping BUP_DIR.\n') continue if xdev != None and pst.st_dev != xdev \ and path not in xdev_exceptions: debug1('Skipping contents of %r: different filesystem.\n' % path) else: try: OsFile(name).fchdir() except OSError as e: add_error('%s: %s' % (prepend, e)) else: for i in _recursive_dirlist(prepend=prepend+name, xdev=xdev, bup_dir=bup_dir, excluded_paths=excluded_paths, exclude_rxs=exclude_rxs, xdev_exceptions=xdev_exceptions): yield i os.chdir('..') yield (path, pst) def recursive_dirlist(paths, xdev, bup_dir=None, excluded_paths=None, exclude_rxs=None, xdev_exceptions=frozenset()): startdir = OsFile('.') try: assert(type(paths) != type('')) for path in paths: try: pst = xstat.lstat(path) if stat.S_ISLNK(pst.st_mode): yield (path, pst) continue except OSError as e: add_error('recursive_dirlist: %s' % e) continue try: pfile = OsFile(path) except OSError as e: add_error(e) continue pst = pfile.stat() if xdev: xdev = pst.st_dev else: xdev = None if 
stat.S_ISDIR(pst.st_mode): pfile.fchdir() prepend = os.path.join(path, '') for i in _recursive_dirlist(prepend=prepend, xdev=xdev, bup_dir=bup_dir, excluded_paths=excluded_paths, exclude_rxs=exclude_rxs, xdev_exceptions=xdev_exceptions): yield i startdir.fchdir() else: prepend = path yield (prepend,pst) except: try: startdir.fchdir() except: pass raise bup-0.29/lib/bup/gc.py000066400000000000000000000234431303127641400145400ustar00rootroot00000000000000import glob, os, subprocess, sys, tempfile from bup import bloom, git, midx from bup.git import MissingObject, walk_object from bup.helpers import Nonlocal, log, progress, qprogress from os.path import basename # This garbage collector uses a Bloom filter to track the live objects # during the mark phase. This means that the collection is # probabilistic; it may retain some (known) percentage of garbage, but # it can also work within a reasonable, fixed RAM budget for any # particular percentage and repository size. # # The collection proceeds as follows: # # - Scan all live objects by walking all of the refs, and insert # every hash encountered into a new Bloom "liveness" filter. # Compute the size of the liveness filter based on the total # number of objects in the repository. This is the "mark phase". # # - Clear the data that's dependent on the repository's object set, # i.e. the reflog, the normal Bloom filter, and the midxes. # # - Traverse all of the pack files, consulting the liveness filter # to decide which objects to keep. # # For each pack file, rewrite it iff it probably contains more # than (currently) 10% garbage (computed by an initial traversal # of the packfile in consultation with the liveness filter). To # rewrite, traverse the packfile (again) and write each hash that # tests positive against the liveness filter to a packwriter. # # During the traversal of all of the packfiles, delete redundant, # old packfiles only after the packwriter has finished the pack # that contains all of their live objects. # # The current code unconditionally tracks the set of tree hashes seen # during the mark phase, and skips any that have already been visited. # This should decrease the IO load at the cost of increased RAM use. # FIXME: add a bloom filter tuning parameter? def count_objects(dir, verbosity): # For now we'll just use open_idx(), but we could probably be much # more efficient since all we need is a single integer (the last # fanout entry) from each index. object_count = 0 indexes = glob.glob(os.path.join(dir, '*.idx')) for i, idx_name in enumerate(indexes): if verbosity: log('found %d objects (%d/%d %s)\r' % (object_count, i + 1, len(indexes), basename(idx_name))) idx = git.open_idx(idx_name) object_count += len(idx) return object_count def report_live_item(n, total, ref_name, ref_id, item, verbosity): status = 'scanned %02.2f%%' % (n * 100.0 / total) hex_id = ref_id.encode('hex') dirslash = '/' if item.type == 'tree' else '' chunk_path = item.chunk_path if chunk_path: if verbosity < 4: return ps = '/'.join(item.path) chunk_ps = '/'.join(chunk_path) log('%s %s:%s/%s%s\n' % (status, hex_id, ps, chunk_ps, dirslash)) return # Top commit, for example has none. demangled = git.demangle_name(item.path[-1], item.mode)[0] if item.path \ else None # Don't print mangled paths unless the verbosity is over 3. 
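    # Roughly: verbosity 1 shows a single updating progress line, 2 also
    # logs trees, 3 also logs blobs, and 4+ additionally logs chunked and
    # otherwise mangled paths.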
if demangled: ps = '/'.join(item.path[:-1] + [demangled]) if verbosity == 1: qprogress('%s %s:%s%s\r' % (status, hex_id, ps, dirslash)) elif (verbosity > 1 and item.type == 'tree') \ or (verbosity > 2 and item.type == 'blob'): log('%s %s:%s%s\n' % (status, hex_id, ps, dirslash)) elif verbosity > 3: ps = '/'.join(item.path) log('%s %s:%s%s\n' % (status, hex_id, ps, dirslash)) def find_live_objects(existing_count, cat_pipe, verbosity=0): prune_visited_trees = True # In case we want a command line option later pack_dir = git.repo('objects/pack') ffd, bloom_filename = tempfile.mkstemp('.bloom', 'tmp-gc-', pack_dir) os.close(ffd) # FIXME: allow selection of k? # FIXME: support ephemeral bloom filters (i.e. *never* written to disk) live_objs = bloom.create(bloom_filename, expected=existing_count, k=None) # live_objs will hold on to the fd until close or exit os.unlink(bloom_filename) stop_at, trees_visited = None, None if prune_visited_trees: trees_visited = set() stop_at = lambda (x): x.decode('hex') in trees_visited approx_live_count = 0 for ref_name, ref_id in git.list_refs(): for item in walk_object(cat_pipe, ref_id.encode('hex'), stop_at=stop_at, include_data=None): # FIXME: batch ids if verbosity: report_live_item(approx_live_count, existing_count, ref_name, ref_id, item, verbosity) bin_id = item.id.decode('hex') if trees_visited is not None and item.type == 'tree': trees_visited.add(bin_id) if verbosity: if not live_objs.exists(bin_id): live_objs.add(bin_id) approx_live_count += 1 else: live_objs.add(bin_id) trees_visited = None if verbosity: log('expecting to retain about %.2f%% unnecessary objects\n' % live_objs.pfalse_positive()) return live_objs def sweep(live_objects, existing_count, cat_pipe, threshold, compression, verbosity): # Traverse all the packs, saving the (probably) live data. ns = Nonlocal() ns.stale_files = [] def remove_stale_files(new_pack_prefix): if verbosity and new_pack_prefix: log('created ' + basename(new_pack_prefix) + '\n') for p in ns.stale_files: if verbosity: log('removing ' + basename(p) + '\n') os.unlink(p) if ns.stale_files: # So git cat-pipe will close them cat_pipe.restart() ns.stale_files = [] writer = git.PackWriter(objcache_maker=None, compression_level=compression, run_midx=False, on_pack_finish=remove_stale_files) # FIXME: sanity check .idx names vs .pack names? 
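    # For each pack below: if none of its objects are live it is deleted
    # outright; if its live fraction exceeds (100 - threshold)% it is kept
    # as-is; otherwise its live objects are copied into the pack being
    # written and the old pack is deleted.  With the default threshold of 10
    # that means any pack that is at least 10% garbage gets rewritten.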
collect_count = 0 for idx_name in glob.glob(os.path.join(git.repo('objects/pack'), '*.idx')): if verbosity: qprogress('preserving live data (%d%% complete)\r' % ((float(collect_count) / existing_count) * 100)) idx = git.open_idx(idx_name) idx_live_count = 0 for i in xrange(0, len(idx)): sha = idx.shatable[i * 20 : (i + 1) * 20] if live_objects.exists(sha): idx_live_count += 1 collect_count += idx_live_count if idx_live_count == 0: if verbosity: log('deleting %s\n' % git.repo_rel(basename(idx_name))) ns.stale_files.append(idx_name) ns.stale_files.append(idx_name[:-3] + 'pack') continue live_frac = idx_live_count / float(len(idx)) if live_frac > ((100 - threshold) / 100.0): if verbosity: log('keeping %s (%d%% live)\n' % (git.repo_rel(basename(idx_name)), live_frac * 100)) continue if verbosity: log('rewriting %s (%.2f%% live)\n' % (basename(idx_name), live_frac * 100)) for i in xrange(0, len(idx)): sha = idx.shatable[i * 20 : (i + 1) * 20] if live_objects.exists(sha): item_it = cat_pipe.get(sha.encode('hex')) type = item_it.next() writer.just_write(sha, type, ''.join(item_it)) ns.stale_files.append(idx_name) ns.stale_files.append(idx_name[:-3] + 'pack') if verbosity: progress('preserving live data (%d%% complete)\n' % ((float(collect_count) / existing_count) * 100)) # Nothing should have recreated midx/bloom yet. pack_dir = git.repo('objects/pack') assert(not os.path.exists(os.path.join(pack_dir, 'bup.bloom'))) assert(not glob.glob(os.path.join(pack_dir, '*.midx'))) # try/catch should call writer.abort()? # This will finally run midx. writer.close() # Can only change refs (if needed) after this. remove_stale_files(None) # In case we didn't write to the writer. if verbosity: log('discarded %d%% of objects\n' % ((existing_count - count_objects(pack_dir, verbosity)) / float(existing_count) * 100)) def bup_gc(threshold=10, compression=1, verbosity=0): cat_pipe = git.cp() existing_count = count_objects(git.repo('objects/pack'), verbosity) if verbosity: log('found %d objects\n' % existing_count) if not existing_count: if verbosity: log('nothing to collect\n') else: try: live_objects = find_live_objects(existing_count, cat_pipe, verbosity=verbosity) except MissingObject as ex: log('bup: missing object %r \n' % ex.id.encode('hex')) sys.exit(1) try: # FIXME: just rename midxes and bloom, and restore them at the end if # we didn't change any packs? if verbosity: log('clearing midx files\n') midx.clear_midxes() if verbosity: log('clearing bloom filter\n') bloom.clear_bloom(git.repo('objects/pack')) if verbosity: log('clearing reflog\n') expirelog_cmd = ['git', 'reflog', 'expire', '--all', '--expire=all'] expirelog = subprocess.Popen(expirelog_cmd, preexec_fn = git._gitenv()) git._git_wait(' '.join(expirelog_cmd), expirelog) if verbosity: log('removing unreachable data\n') sweep(live_objects, existing_count, cat_pipe, threshold, compression, verbosity) finally: live_objects.close() bup-0.29/lib/bup/git.py000066400000000000000000001305051303127641400147300ustar00rootroot00000000000000"""Git interaction library. bup repositories are in Git format. This library allows us to interact with the Git data structures. 
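
A minimal usage sketch (the branch name below is hypothetical), assuming
BUP_DIR already points at an existing repository:

    from bup import git
    git.check_repo_or_die()
    head = git.read_ref('refs/heads/mybackup')
    if head:
        print head.encode('hex')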
""" import errno, os, sys, zlib, time, subprocess, struct, stat, re, tempfile, glob from collections import namedtuple from itertools import islice from bup import _helpers, hashsplit, path, midx, bloom, xstat from bup.helpers import (Sha1, add_error, chunkyreader, debug1, debug2, fdatasync, hostname, localtime, log, merge_iter, mmap_read, mmap_readwrite, progress, qprogress, stat_if_exists, unlink, username, userfullname, utc_offset_str) max_pack_size = 1000*1000*1000 # larger packs will slow down pruning max_pack_objects = 200*1000 # cache memory usage is about 83 bytes per object verbose = 0 ignore_midx = 0 repodir = None # The default repository, once initialized _typemap = { 'blob':3, 'tree':2, 'commit':1, 'tag':4 } _typermap = { 3:'blob', 2:'tree', 1:'commit', 4:'tag' } _total_searches = 0 _total_steps = 0 class GitError(Exception): pass def parse_tz_offset(s): """UTC offset in seconds.""" tz_off = (int(s[1:3]) * 60 * 60) + (int(s[3:5]) * 60) if s[0] == '-': return - tz_off return tz_off # FIXME: derived from http://git.rsbx.net/Documents/Git_Data_Formats.txt # Make sure that's authoritative. _start_end_char = r'[^ .,:;<>"\'\0\n]' _content_char = r'[^\0\n<>]' _safe_str_rx = '(?:%s{1,2}|(?:%s%s*%s))' \ % (_start_end_char, _start_end_char, _content_char, _start_end_char) _tz_rx = r'[-+]\d\d[0-5]\d' _parent_rx = r'(?:parent [abcdefABCDEF0123456789]{40}\n)' _commit_rx = re.compile(r'''tree (?P[abcdefABCDEF0123456789]{40}) (?P%s*)author (?P%s) <(?P%s)> (?P\d+) (?P%s) committer (?P%s) <(?P%s)> (?P\d+) (?P%s) (?P(?:.|\n)*)''' % (_parent_rx, _safe_str_rx, _safe_str_rx, _tz_rx, _safe_str_rx, _safe_str_rx, _tz_rx)) _parent_hash_rx = re.compile(r'\s*parent ([abcdefABCDEF0123456789]{40})\s*') # Note that the author_sec and committer_sec values are (UTC) epoch seconds. CommitInfo = namedtuple('CommitInfo', ['tree', 'parents', 'author_name', 'author_mail', 'author_sec', 'author_offset', 'committer_name', 'committer_mail', 'committer_sec', 'committer_offset', 'message']) def parse_commit(content): commit_match = re.match(_commit_rx, content) if not commit_match: raise Exception('cannot parse commit %r' % content) matches = commit_match.groupdict() return CommitInfo(tree=matches['tree'], parents=re.findall(_parent_hash_rx, matches['parents']), author_name=matches['author_name'], author_mail=matches['author_mail'], author_sec=int(matches['asec']), author_offset=parse_tz_offset(matches['atz']), committer_name=matches['committer_name'], committer_mail=matches['committer_mail'], committer_sec=int(matches['csec']), committer_offset=parse_tz_offset(matches['ctz']), message=matches['message']) def get_commit_items(id, cp): commit_it = cp.get(id) assert(commit_it.next() == 'commit') commit_content = ''.join(commit_it) return parse_commit(commit_content) def _local_git_date_str(epoch_sec): return '%d %s' % (epoch_sec, utc_offset_str(epoch_sec)) def _git_date_str(epoch_sec, tz_offset_sec): offs = tz_offset_sec // 60 return '%d %s%02d%02d' \ % (epoch_sec, '+' if offs >= 0 else '-', abs(offs) // 60, abs(offs) % 60) def repo(sub = '', repo_dir=None): """Get the path to the git repository or one of its subdirectories.""" global repodir repo_dir = repo_dir or repodir if not repo_dir: raise GitError('You should call check_repo_or_die()') # If there's a .git subdirectory, then the actual repo is in there. 
gd = os.path.join(repo_dir, '.git') if os.path.exists(gd): repodir = gd return os.path.join(repo_dir, sub) def shorten_hash(s): return re.sub(r'([^0-9a-z]|\b)([0-9a-z]{7})[0-9a-z]{33}([^0-9a-z]|\b)', r'\1\2*\3', s) def repo_rel(path): full = os.path.abspath(path) fullrepo = os.path.abspath(repo('')) if not fullrepo.endswith('/'): fullrepo += '/' if full.startswith(fullrepo): path = full[len(fullrepo):] if path.startswith('index-cache/'): path = path[len('index-cache/'):] return shorten_hash(path) def all_packdirs(): paths = [repo('objects/pack')] paths += glob.glob(repo('index-cache/*/.')) return paths def auto_midx(objdir): args = [path.exe(), 'midx', '--auto', '--dir', objdir] try: rv = subprocess.call(args, stdout=open('/dev/null', 'w')) except OSError as e: # make sure 'args' gets printed to help with debugging add_error('%r: exception: %s' % (args, e)) raise if rv: add_error('%r: returned %d' % (args, rv)) args = [path.exe(), 'bloom', '--dir', objdir] try: rv = subprocess.call(args, stdout=open('/dev/null', 'w')) except OSError as e: # make sure 'args' gets printed to help with debugging add_error('%r: exception: %s' % (args, e)) raise if rv: add_error('%r: returned %d' % (args, rv)) def mangle_name(name, mode, gitmode): """Mangle a file name to present an abstract name for segmented files. Mangled file names will have the ".bup" extension added to them. If a file's name already ends with ".bup", a ".bupl" extension is added to disambiguate normal files from segmented ones. """ if stat.S_ISREG(mode) and not stat.S_ISREG(gitmode): assert(stat.S_ISDIR(gitmode)) return name + '.bup' elif name.endswith('.bup') or name[:-1].endswith('.bup'): return name + '.bupl' else: return name (BUP_NORMAL, BUP_CHUNKED) = (0,1) def demangle_name(name, mode): """Remove name mangling from a file name, if necessary. 
The return value is a tuple (demangled_filename,mode), where mode is one of the following: * BUP_NORMAL : files that should be read as-is from the repository * BUP_CHUNKED : files that were chunked and need to be reassembled For more information on the name mangling algorithm, see mangle_name() """ if name.endswith('.bupl'): return (name[:-5], BUP_NORMAL) elif name.endswith('.bup'): return (name[:-4], BUP_CHUNKED) elif name.endswith('.bupm'): return (name[:-5], BUP_CHUNKED if stat.S_ISDIR(mode) else BUP_NORMAL) else: return (name, BUP_NORMAL) def calc_hash(type, content): """Calculate some content's hash in the Git fashion.""" header = '%s %d\0' % (type, len(content)) sum = Sha1(header) sum.update(content) return sum.digest() def shalist_item_sort_key(ent): (mode, name, id) = ent assert(mode+0 == mode) if stat.S_ISDIR(mode): return name + '/' else: return name def tree_encode(shalist): """Generate a git tree object from (mode,name,hash) tuples.""" shalist = sorted(shalist, key = shalist_item_sort_key) l = [] for (mode,name,bin) in shalist: assert(mode) assert(mode+0 == mode) assert(name) assert(len(bin) == 20) s = '%o %s\0%s' % (mode,name,bin) assert(s[0] != '0') # 0-padded octal is not acceptable in a git tree l.append(s) return ''.join(l) def tree_decode(buf): """Generate a list of (mode,name,hash) from the git tree object in buf.""" ofs = 0 while ofs < len(buf): z = buf.find('\0', ofs) assert(z > ofs) spl = buf[ofs:z].split(' ', 1) assert(len(spl) == 2) mode,name = spl sha = buf[z+1:z+1+20] ofs = z+1+20 yield (int(mode, 8), name, sha) def _encode_packobj(type, content, compression_level=1): if compression_level not in (0, 1, 2, 3, 4, 5, 6, 7, 8, 9): raise ValueError('invalid compression level %s' % compression_level) szout = '' sz = len(content) szbits = (sz & 0x0f) | (_typemap[type]<<4) sz >>= 4 while 1: if sz: szbits |= 0x80 szout += chr(szbits) if not sz: break szbits = sz & 0x7f sz >>= 7 z = zlib.compressobj(compression_level) yield szout yield z.compress(content) yield z.flush() def _encode_looseobj(type, content, compression_level=1): z = zlib.compressobj(compression_level) yield z.compress('%s %d\0' % (type, len(content))) yield z.compress(content) yield z.flush() def _decode_looseobj(buf): assert(buf); s = zlib.decompress(buf) i = s.find('\0') assert(i > 0) l = s[:i].split(' ') type = l[0] sz = int(l[1]) content = s[i+1:] assert(type in _typemap) assert(sz == len(content)) return (type, content) def _decode_packobj(buf): assert(buf) c = ord(buf[0]) type = _typermap[(c & 0x70) >> 4] sz = c & 0x0f shift = 4 i = 0 while c & 0x80: i += 1 c = ord(buf[i]) sz |= (c & 0x7f) << shift shift += 7 if not (c & 0x80): break return (type, zlib.decompress(buf[i+1:])) class PackIdx: def __init__(self): assert(0) def find_offset(self, hash): """Get the offset of an object inside the index file.""" idx = self._idx_from_hash(hash) if idx != None: return self._ofs_from_idx(idx) return None def exists(self, hash, want_source=False): """Return nonempty if the object exists in this index.""" if hash and (self._idx_from_hash(hash) != None): return want_source and os.path.basename(self.name) or True return None def __len__(self): return int(self.fanout[255]) def _idx_from_hash(self, hash): global _total_searches, _total_steps _total_searches += 1 assert(len(hash) == 20) b1 = ord(hash[0]) start = self.fanout[b1-1] # range -1..254 end = self.fanout[b1] # range 0..255 want = str(hash) _total_steps += 1 # lookup table is a step while start < end: _total_steps += 1 mid = start + (end-start)/2 v = 
self._idx_to_hash(mid) if v < want: start = mid+1 elif v > want: end = mid else: # got it! return mid return None class PackIdxV1(PackIdx): """Object representation of a Git pack index (version 1) file.""" def __init__(self, filename, f): self.name = filename self.idxnames = [self.name] self.map = mmap_read(f) self.fanout = list(struct.unpack('!256I', str(buffer(self.map, 0, 256*4)))) self.fanout.append(0) # entry "-1" nsha = self.fanout[255] self.sha_ofs = 256*4 self.shatable = buffer(self.map, self.sha_ofs, nsha*24) def _ofs_from_idx(self, idx): return struct.unpack('!I', str(self.shatable[idx*24 : idx*24+4]))[0] def _idx_to_hash(self, idx): return str(self.shatable[idx*24+4 : idx*24+24]) def __iter__(self): for i in xrange(self.fanout[255]): yield buffer(self.map, 256*4 + 24*i + 4, 20) class PackIdxV2(PackIdx): """Object representation of a Git pack index (version 2) file.""" def __init__(self, filename, f): self.name = filename self.idxnames = [self.name] self.map = mmap_read(f) assert(str(self.map[0:8]) == '\377tOc\0\0\0\2') self.fanout = list(struct.unpack('!256I', str(buffer(self.map, 8, 256*4)))) self.fanout.append(0) # entry "-1" nsha = self.fanout[255] self.sha_ofs = 8 + 256*4 self.shatable = buffer(self.map, self.sha_ofs, nsha*20) self.ofstable = buffer(self.map, self.sha_ofs + nsha*20 + nsha*4, nsha*4) self.ofs64table = buffer(self.map, 8 + 256*4 + nsha*20 + nsha*4 + nsha*4) def _ofs_from_idx(self, idx): ofs = struct.unpack('!I', str(buffer(self.ofstable, idx*4, 4)))[0] if ofs & 0x80000000: idx64 = ofs & 0x7fffffff ofs = struct.unpack('!Q', str(buffer(self.ofs64table, idx64*8, 8)))[0] return ofs def _idx_to_hash(self, idx): return str(self.shatable[idx*20:(idx+1)*20]) def __iter__(self): for i in xrange(self.fanout[255]): yield buffer(self.map, 8 + 256*4 + 20*i, 20) _mpi_count = 0 class PackIdxList: def __init__(self, dir): global _mpi_count assert(_mpi_count == 0) # these things suck tons of VM; don't waste it _mpi_count += 1 self.dir = dir self.also = set() self.packs = [] self.do_bloom = False self.bloom = None self.refresh() def __del__(self): global _mpi_count _mpi_count -= 1 assert(_mpi_count == 0) def __iter__(self): return iter(idxmerge(self.packs)) def __len__(self): return sum(len(pack) for pack in self.packs) def exists(self, hash, want_source=False): """Return nonempty if the object exists in the index files.""" global _total_searches _total_searches += 1 if hash in self.also: return True if self.do_bloom and self.bloom: if self.bloom.exists(hash): self.do_bloom = False else: _total_searches -= 1 # was counted by bloom return None for i in xrange(len(self.packs)): p = self.packs[i] _total_searches -= 1 # will be incremented by sub-pack ix = p.exists(hash, want_source=want_source) if ix: # reorder so most recently used packs are searched first self.packs = [p] + self.packs[:i] + self.packs[i+1:] return ix self.do_bloom = True return None def refresh(self, skip_midx = False): """Refresh the index list. This method verifies if .midx files were superseded (e.g. all of its contents are in another, bigger .midx file) and removes the superseded files. If skip_midx is True, all work on .midx files will be skipped and .midx files will be removed from the list. The module-global variable 'ignore_midx' can force this function to always act as if skip_midx was True. 
""" self.bloom = None # Always reopen the bloom as it may have been relaced self.do_bloom = False skip_midx = skip_midx or ignore_midx d = dict((p.name, p) for p in self.packs if not skip_midx or not isinstance(p, midx.PackMidx)) if os.path.exists(self.dir): if not skip_midx: midxl = [] for ix in self.packs: if isinstance(ix, midx.PackMidx): for name in ix.idxnames: d[os.path.join(self.dir, name)] = ix for full in glob.glob(os.path.join(self.dir,'*.midx')): if not d.get(full): mx = midx.PackMidx(full) (mxd, mxf) = os.path.split(mx.name) broken = False for n in mx.idxnames: if not os.path.exists(os.path.join(mxd, n)): log(('warning: index %s missing\n' + ' used by %s\n') % (n, mxf)) broken = True if broken: mx.close() del mx unlink(full) else: midxl.append(mx) midxl.sort(key=lambda ix: (-len(ix), -xstat.stat(ix.name).st_mtime)) for ix in midxl: any_needed = False for sub in ix.idxnames: found = d.get(os.path.join(self.dir, sub)) if not found or isinstance(found, PackIdx): # doesn't exist, or exists but not in a midx any_needed = True break if any_needed: d[ix.name] = ix for name in ix.idxnames: d[os.path.join(self.dir, name)] = ix elif not ix.force_keep: debug1('midx: removing redundant: %s\n' % os.path.basename(ix.name)) ix.close() unlink(ix.name) for full in glob.glob(os.path.join(self.dir,'*.idx')): if not d.get(full): try: ix = open_idx(full) except GitError as e: add_error(e) continue d[full] = ix bfull = os.path.join(self.dir, 'bup.bloom') if self.bloom is None and os.path.exists(bfull): self.bloom = bloom.ShaBloom(bfull) self.packs = list(set(d.values())) self.packs.sort(lambda x,y: -cmp(len(x),len(y))) if self.bloom and self.bloom.valid() and len(self.bloom) >= len(self): self.do_bloom = True else: self.bloom = None debug1('PackIdxList: using %d index%s.\n' % (len(self.packs), len(self.packs)!=1 and 'es' or '')) def add(self, hash): """Insert an additional object in the list.""" self.also.add(hash) def open_idx(filename): if filename.endswith('.idx'): f = open(filename, 'rb') header = f.read(8) if header[0:4] == '\377tOc': version = struct.unpack('!I', header[4:8])[0] if version == 2: return PackIdxV2(filename, f) else: raise GitError('%s: expected idx file version 2, got %d' % (filename, version)) elif len(header) == 8 and header[0:4] < '\377tOc': return PackIdxV1(filename, f) else: raise GitError('%s: unrecognized idx file header' % filename) elif filename.endswith('.midx'): return midx.PackMidx(filename) else: raise GitError('idx filenames must end with .idx or .midx') def idxmerge(idxlist, final_progress=True): """Generate a list of all the objects reachable in a PackIdxList.""" def pfunc(count, total): qprogress('Reading indexes: %.2f%% (%d/%d)\r' % (count*100.0/total, count, total)) def pfinal(count, total): if final_progress: progress('Reading indexes: %.2f%% (%d/%d), done.\n' % (100, total, total)) return merge_iter(idxlist, 10024, pfunc, pfinal) def _make_objcache(): return PackIdxList(repo('objects/pack')) # bup-gc assumes that it can disable all PackWriter activities # (bloom/midx/cache) via the constructor and close() arguments. 
class PackWriter: """Writes Git objects inside a pack file.""" def __init__(self, objcache_maker=_make_objcache, compression_level=1, run_midx=True, on_pack_finish=None): self.file = None self.parentfd = None self.count = 0 self.outbytes = 0 self.filename = None self.idx = None self.objcache_maker = objcache_maker self.objcache = None self.compression_level = compression_level self.run_midx=run_midx self.on_pack_finish = on_pack_finish def __del__(self): self.close() def _open(self): if not self.file: objdir = dir=repo('objects') fd, name = tempfile.mkstemp(suffix='.pack', dir=objdir) try: self.file = os.fdopen(fd, 'w+b') except: os.close(fd) raise try: self.parentfd = os.open(objdir, os.O_RDONLY) except: f = self.file self.file = None f.close() raise assert(name.endswith('.pack')) self.filename = name[:-5] self.file.write('PACK\0\0\0\2\0\0\0\0') self.idx = list(list() for i in xrange(256)) def _raw_write(self, datalist, sha): self._open() f = self.file # in case we get interrupted (eg. KeyboardInterrupt), it's best if # the file never has a *partial* blob. So let's make sure it's # all-or-nothing. (The blob shouldn't be very big anyway, thanks # to our hashsplit algorithm.) f.write() does its own buffering, # but that's okay because we'll flush it in _end(). oneblob = ''.join(datalist) try: f.write(oneblob) except IOError as e: raise GitError, e, sys.exc_info()[2] nw = len(oneblob) crc = zlib.crc32(oneblob) & 0xffffffff self._update_idx(sha, crc, nw) self.outbytes += nw self.count += 1 return nw, crc def _update_idx(self, sha, crc, size): assert(sha) if self.idx: self.idx[ord(sha[0])].append((sha, crc, self.file.tell() - size)) def _write(self, sha, type, content): if verbose: log('>') if not sha: sha = calc_hash(type, content) size, crc = self._raw_write(_encode_packobj(type, content, self.compression_level), sha=sha) if self.outbytes >= max_pack_size or self.count >= max_pack_objects: self.breakpoint() return sha def breakpoint(self): """Clear byte and object counts and return the last processed id.""" id = self._end(self.run_midx) self.outbytes = self.count = 0 return id def _require_objcache(self): if self.objcache is None and self.objcache_maker: self.objcache = self.objcache_maker() if self.objcache is None: raise GitError( "PackWriter not opened or can't check exists w/o objcache") def exists(self, id, want_source=False): """Return non-empty if an object is found in the object cache.""" self._require_objcache() return self.objcache.exists(id, want_source=want_source) def just_write(self, sha, type, content): """Write an object to the pack file, bypassing the objcache. Fails if sha exists().""" self._write(sha, type, content) def maybe_write(self, type, content): """Write an object to the pack file if not present and return its id.""" sha = calc_hash(type, content) if not self.exists(sha): self.just_write(sha, type, content) self._require_objcache() self.objcache.add(sha) return sha def new_blob(self, blob): """Create a blob object in the pack with the supplied content.""" return self.maybe_write('blob', blob) def new_tree(self, shalist): """Create a tree object in the pack.""" content = tree_encode(shalist) return self.maybe_write('tree', content) def new_commit(self, tree, parent, author, adate_sec, adate_tz, committer, cdate_sec, cdate_tz, msg): """Create a commit object in the pack. 
The date_sec values must be epoch-seconds, and if a tz is None, the local timezone is assumed.""" if adate_tz: adate_str = _git_date_str(adate_sec, adate_tz) else: adate_str = _local_git_date_str(adate_sec) if cdate_tz: cdate_str = _git_date_str(cdate_sec, cdate_tz) else: cdate_str = _local_git_date_str(cdate_sec) l = [] if tree: l.append('tree %s' % tree.encode('hex')) if parent: l.append('parent %s' % parent.encode('hex')) if author: l.append('author %s %s' % (author, adate_str)) if committer: l.append('committer %s %s' % (committer, cdate_str)) l.append('') l.append(msg) return self.maybe_write('commit', '\n'.join(l)) def abort(self): """Remove the pack file from disk.""" f = self.file if f: pfd = self.parentfd self.file = None self.parentfd = None self.idx = None try: try: os.unlink(self.filename + '.pack') finally: f.close() finally: if pfd is not None: os.close(pfd) def _end(self, run_midx=True): f = self.file if not f: return None self.file = None try: self.objcache = None idx = self.idx self.idx = None # update object count f.seek(8) cp = struct.pack('!i', self.count) assert(len(cp) == 4) f.write(cp) # calculate the pack sha1sum f.seek(0) sum = Sha1() for b in chunkyreader(f): sum.update(b) packbin = sum.digest() f.write(packbin) fdatasync(f.fileno()) finally: f.close() obj_list_sha = self._write_pack_idx_v2(self.filename + '.idx', idx, packbin) nameprefix = repo('objects/pack/pack-%s' % obj_list_sha) if os.path.exists(self.filename + '.map'): os.unlink(self.filename + '.map') os.rename(self.filename + '.pack', nameprefix + '.pack') os.rename(self.filename + '.idx', nameprefix + '.idx') try: os.fsync(self.parentfd) finally: os.close(self.parentfd) if run_midx: auto_midx(repo('objects/pack')) if self.on_pack_finish: self.on_pack_finish(nameprefix) return nameprefix def close(self, run_midx=True): """Close the pack file and move it to its definitive path.""" return self._end(run_midx=run_midx) def _write_pack_idx_v2(self, filename, idx, packbin): ofs64_count = 0 for section in idx: for entry in section: if entry[2] >= 2**31: ofs64_count += 1 # Length: header + fan-out + shas-and-crcs + overflow-offsets index_len = 8 + (4 * 256) + (28 * self.count) + (8 * ofs64_count) idx_map = None idx_f = open(filename, 'w+b') try: idx_f.truncate(index_len) fdatasync(idx_f.fileno()) idx_map = mmap_readwrite(idx_f, close=False) try: count = _helpers.write_idx(filename, idx_map, idx, self.count) assert(count == self.count) idx_map.flush() finally: idx_map.close() finally: idx_f.close() idx_f = open(filename, 'a+b') try: idx_f.write(packbin) idx_f.seek(0) idx_sum = Sha1() b = idx_f.read(8 + 4*256) idx_sum.update(b) obj_list_sum = Sha1() for b in chunkyreader(idx_f, 20*self.count): idx_sum.update(b) obj_list_sum.update(b) namebase = obj_list_sum.hexdigest() for b in chunkyreader(idx_f): idx_sum.update(b) idx_f.write(idx_sum.digest()) fdatasync(idx_f.fileno()) return namebase finally: idx_f.close() def _gitenv(repo_dir = None): if not repo_dir: repo_dir = repo() def env(): os.environ['GIT_DIR'] = os.path.abspath(repo_dir) return env def list_refs(refnames=None, repo_dir=None, limit_to_heads=False, limit_to_tags=False): """Yield (refname, hash) tuples for all repository refs unless refnames are specified. In that case, only include tuples for those refs. The limits restrict the result items to refs/heads or refs/tags. If both limits are specified, items from both sources will be included. 
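
For example (the ref name here is hypothetical):

    for name, sha in list_refs(limit_to_heads=True):
        log('%s %s\n' % (name, sha.encode('hex')))  # sha is 20 binary bytes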
""" argv = ['git', 'show-ref'] if limit_to_heads: argv.append('--heads') if limit_to_tags: argv.append('--tags') argv.append('--') if refnames: argv += refnames p = subprocess.Popen(argv, preexec_fn = _gitenv(repo_dir), stdout = subprocess.PIPE) out = p.stdout.read().strip() rv = p.wait() # not fatal if rv: assert(not out) if out: for d in out.split('\n'): (sha, name) = d.split(' ', 1) yield (name, sha.decode('hex')) def read_ref(refname, repo_dir = None): """Get the commit id of the most recent commit made on a given ref.""" refs = list_refs(refnames=[refname], repo_dir=repo_dir, limit_to_heads=True) l = tuple(islice(refs, 2)) if l: assert(len(l) == 1) return l[0][1] else: return None def rev_list(ref, count=None, repo_dir=None): """Generate a list of reachable commits in reverse chronological order. This generator walks through commits, from child to parent, that are reachable via the specified ref and yields a series of tuples of the form (date,hash). If count is a non-zero integer, limit the number of commits to "count" objects. """ assert(not ref.startswith('-')) opts = [] if count: opts += ['-n', str(atoi(count))] argv = ['git', 'rev-list', '--pretty=format:%at'] + opts + [ref, '--'] p = subprocess.Popen(argv, preexec_fn = _gitenv(repo_dir), stdout = subprocess.PIPE) commit = None for row in p.stdout: s = row.strip() if s.startswith('commit '): commit = s[7:].decode('hex') else: date = int(s) yield (date, commit) rv = p.wait() # not fatal if rv: raise GitError, 'git rev-list returned error %d' % rv def get_commit_dates(refs, repo_dir=None): """Get the dates for the specified commit refs. For now, every unique string in refs must resolve to a different commit or this function will fail.""" result = [] for ref in refs: commit = get_commit_items(ref, cp(repo_dir)) result.append(commit.author_sec) return result def rev_parse(committish, repo_dir=None): """Resolve the full hash for 'committish', if it exists. Should be roughly equivalent to 'git rev-parse'. Returns the hex value of the hash if it is found, None if 'committish' does not correspond to anything. """ head = read_ref(committish, repo_dir=repo_dir) if head: debug2("resolved from ref: commit = %s\n" % head.encode('hex')) return head pL = PackIdxList(repo('objects/pack', repo_dir=repo_dir)) if len(committish) == 40: try: hash = committish.decode('hex') except TypeError: return None if pL.exists(hash): return hash return None def update_ref(refname, newval, oldval, repo_dir=None): """Update a repository reference.""" if not oldval: oldval = '' assert(refname.startswith('refs/heads/') \ or refname.startswith('refs/tags/')) p = subprocess.Popen(['git', 'update-ref', refname, newval.encode('hex'), oldval.encode('hex')], preexec_fn = _gitenv(repo_dir)) _git_wait('git update-ref', p) def delete_ref(refname, oldvalue=None): """Delete a repository reference (see git update-ref(1)).""" assert(refname.startswith('refs/')) oldvalue = [] if not oldvalue else [oldvalue] p = subprocess.Popen(['git', 'update-ref', '-d', refname] + oldvalue, preexec_fn = _gitenv()) _git_wait('git update-ref', p) def guess_repo(path=None): """Set the path value in the global variable "repodir". This makes bup look for an existing bup repository, but not fail if a repository doesn't exist. Usually, if you are interacting with a bup repository, you would not be calling this function but using check_repo_or_die(). 
""" global repodir if path: repodir = path if not repodir: repodir = os.environ.get('BUP_DIR') if not repodir: repodir = os.path.expanduser('~/.bup') def init_repo(path=None): """Create the Git bare repository for bup in a given path.""" guess_repo(path) d = repo() # appends a / to the path parent = os.path.dirname(os.path.dirname(d)) if parent and not os.path.exists(parent): raise GitError('parent directory "%s" does not exist\n' % parent) if os.path.exists(d) and not os.path.isdir(os.path.join(d, '.')): raise GitError('"%s" exists but is not a directory\n' % d) p = subprocess.Popen(['git', '--bare', 'init'], stdout=sys.stderr, preexec_fn = _gitenv()) _git_wait('git init', p) # Force the index version configuration in order to ensure bup works # regardless of the version of the installed Git binary. p = subprocess.Popen(['git', 'config', 'pack.indexVersion', '2'], stdout=sys.stderr, preexec_fn = _gitenv()) _git_wait('git config', p) # Enable the reflog p = subprocess.Popen(['git', 'config', 'core.logAllRefUpdates', 'true'], stdout=sys.stderr, preexec_fn = _gitenv()) _git_wait('git config', p) def check_repo_or_die(path=None): """Check to see if a bup repository probably exists, and abort if not.""" guess_repo(path) top = repo() pst = stat_if_exists(top + '/objects/pack') if pst and stat.S_ISDIR(pst.st_mode): return if not pst: top_st = stat_if_exists(top) if not top_st: log('error: repository %r does not exist (see "bup help init")\n' % top) sys.exit(15) log('error: %r is not a repository\n' % top) sys.exit(14) _ver = None def ver(): """Get Git's version and ensure a usable version is installed. The returned version is formatted as an ordered tuple with each position representing a digit in the version tag. For example, the following tuple would represent version 1.6.6.9: ('1', '6', '6', '9') """ global _ver if not _ver: p = subprocess.Popen(['git', '--version'], stdout=subprocess.PIPE) gvs = p.stdout.read() _git_wait('git --version', p) m = re.match(r'git version (\S+.\S+)', gvs) if not m: raise GitError('git --version weird output: %r' % gvs) _ver = tuple(m.group(1).split('.')) needed = ('1','5', '3', '1') if _ver < needed: raise GitError('git version %s or higher is required; you have %s' % ('.'.join(needed), '.'.join(_ver))) return _ver def _git_wait(cmd, p): rv = p.wait() if rv != 0: raise GitError('%s returned %d' % (cmd, rv)) def _git_capture(argv): p = subprocess.Popen(argv, stdout=subprocess.PIPE, preexec_fn = _gitenv()) r = p.stdout.read() _git_wait(repr(argv), p) return r class _AbortableIter: def __init__(self, it, onabort = None): self.it = it self.onabort = onabort self.done = None def __iter__(self): return self def next(self): try: return self.it.next() except StopIteration as e: self.done = True raise except: self.abort() raise def abort(self): """Abort iteration and call the abortion callback, if needed.""" if not self.done: self.done = True if self.onabort: self.onabort() def __del__(self): self.abort() class MissingObject(KeyError): def __init__(self, id): self.id = id KeyError.__init__(self, 'object %r is missing' % id.encode('hex')) _ver_warned = 0 class CatPipe: """Link to 'git cat-file' that is used to retrieve blob data.""" def __init__(self, repo_dir = None): global _ver_warned self.repo_dir = repo_dir wanted = ('1','5','6') if ver() < wanted: if not _ver_warned: log('warning: git version < %s; bup will be slow.\n' % '.'.join(wanted)) _ver_warned = 1 self.get = self._slow_get else: self.p = self.inprogress = None self.get = self._fast_get def _abort(self): if 
self.p: self.p.stdout.close() self.p.stdin.close() self.p = None self.inprogress = None def restart(self): self._abort() self.p = subprocess.Popen(['git', 'cat-file', '--batch'], stdin=subprocess.PIPE, stdout=subprocess.PIPE, close_fds = True, bufsize = 4096, preexec_fn = _gitenv(self.repo_dir)) def _fast_get(self, id): if not self.p or self.p.poll() != None: self.restart() assert(self.p) poll_result = self.p.poll() assert(poll_result == None) if self.inprogress: log('_fast_get: opening %r while %r is open\n' % (id, self.inprogress)) assert(not self.inprogress) assert(id.find('\n') < 0) assert(id.find('\r') < 0) assert(not id.startswith('-')) self.inprogress = id self.p.stdin.write('%s\n' % id) self.p.stdin.flush() hdr = self.p.stdout.readline() if hdr.endswith(' missing\n'): self.inprogress = None raise MissingObject(id.decode('hex')) spl = hdr.split(' ') if len(spl) != 3 or len(spl[0]) != 40: raise GitError('expected blob, got %r' % spl) (hex, type, size) = spl it = _AbortableIter(chunkyreader(self.p.stdout, int(spl[2])), onabort = self._abort) try: yield type for blob in it: yield blob readline_result = self.p.stdout.readline() assert(readline_result == '\n') self.inprogress = None except Exception as e: it.abort() raise def _slow_get(self, id): assert(id.find('\n') < 0) assert(id.find('\r') < 0) assert(id[0] != '-') type = _git_capture(['git', 'cat-file', '-t', id]).strip() yield type p = subprocess.Popen(['git', 'cat-file', type, id], stdout=subprocess.PIPE, preexec_fn = _gitenv(self.repo_dir)) for blob in chunkyreader(p.stdout): yield blob _git_wait('git cat-file', p) def _join(self, it): type = it.next() if type == 'blob': for blob in it: yield blob elif type == 'tree': treefile = ''.join(it) for (mode, name, sha) in tree_decode(treefile): for blob in self.join(sha.encode('hex')): yield blob elif type == 'commit': treeline = ''.join(it).split('\n')[0] assert(treeline.startswith('tree ')) for blob in self.join(treeline[5:]): yield blob else: raise GitError('invalid object type %r: expected blob/tree/commit' % type) def join(self, id): """Generate a list of the content of all blobs that can be reached from an object. The hash given in 'id' must point to a blob, a tree or a commit. The content of all blobs that can be seen from trees or commits will be added to the list. """ try: for d in self._join(self.get(id)): yield d except StopIteration: log('booger!\n') _cp = {} def cp(repo_dir=None): """Create a CatPipe object or reuse the already existing one.""" global _cp, repodir if not repo_dir: repo_dir = repodir or repo() repo_dir = os.path.abspath(repo_dir) cp = _cp.get(repo_dir) if not cp: cp = CatPipe(repo_dir) _cp[repo_dir] = cp return cp def tags(repo_dir = None): """Return a dictionary of all tags in the form {hash: [tag_names, ...]}.""" tags = {} for n, c in list_refs(repo_dir = repo_dir, limit_to_tags=True): assert(n.startswith('refs/tags/')) name = n[10:] if not c in tags: tags[c] = [] tags[c].append(name) # more than one tag can point at 'c' return tags WalkItem = namedtuple('WalkItem', ['id', 'type', 'mode', 'path', 'chunk_path', 'data']) # The path is the mangled path, and if an item represents a fragment # of a chunked file, the chunk_path will be the chunked subtree path # for the chunk, i.e. ['', '2d3115e', ...]. The top-level path for a # chunked file will have a chunk_path of ['']. So some chunk subtree # of the file '/foo/bar/baz' might look like this: # # item.path = ['foo', 'bar', 'baz.bup'] # item.chunk_path = ['', '2d3115e', '016b097'] # item.type = 'tree' # ... 
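# A typical traversal (mirroring the mark phase in lib/bup/gc.py; some_hex_id
# is just a placeholder for a 40-character hex object id) looks roughly like:
#
#   for item in walk_object(cat_pipe, some_hex_id, include_data=None):
#       ...  # item.id is a hex string; item.type is 'blob', 'tree' or 'commit'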
def walk_object(cat_pipe, id, stop_at=None, include_data=None): """Yield everything reachable from id via cat_pipe as a WalkItem, stopping whenever stop_at(id) returns true. Throw MissingObject if a hash encountered is missing from the repository, and don't read or return blob content in the data field unless include_data is set. """ # Maintain the pending stack on the heap to avoid stack overflow pending = [(id, [], [], None)] while len(pending): id, parent_path, chunk_path, mode = pending.pop() if stop_at and stop_at(id): continue if (not include_data) and mode and stat.S_ISREG(mode): # If the object is a "regular file", then it's a leaf in # the graph, so we can skip reading the data if the caller # hasn't requested it. yield WalkItem(id=id, type='blob', chunk_path=chunk_path, path=parent_path, mode=mode, data=None) continue item_it = cat_pipe.get(id) type = item_it.next() if type not in ('blob', 'commit', 'tree'): raise Exception('unexpected repository object type %r' % type) # FIXME: set the mode based on the type when the mode is None if type == 'blob' and not include_data: # Dump data until we can ask cat_pipe not to fetch it for ignored in item_it: pass data = None else: data = ''.join(item_it) yield WalkItem(id=id, type=type, chunk_path=chunk_path, path=parent_path, mode=mode, data=(data if include_data else None)) if type == 'commit': commit_items = parse_commit(data) for pid in commit_items.parents: pending.append((pid, parent_path, chunk_path, mode)) pending.append((commit_items.tree, parent_path, chunk_path, hashsplit.GIT_MODE_TREE)) elif type == 'tree': for mode, name, ent_id in tree_decode(data): demangled, bup_type = demangle_name(name, mode) if chunk_path: sub_path = parent_path sub_chunk_path = chunk_path + [name] else: sub_path = parent_path + [name] if bup_type == BUP_CHUNKED: sub_chunk_path = [''] else: sub_chunk_path = chunk_path pending.append((ent_id.encode('hex'), sub_path, sub_chunk_path, mode)) bup-0.29/lib/bup/hashsplit.py000066400000000000000000000175301303127641400161460ustar00rootroot00000000000000import io, math, os from bup import _helpers, helpers from bup.helpers import sc_page_size _fmincore = getattr(helpers, 'fmincore', None) BLOB_MAX = 8192*4 # 8192 is the "typical" blob size for bupsplit BLOB_READ_SIZE = 1024*1024 MAX_PER_TREE = 256 progress_callback = None fanout = 16 GIT_MODE_FILE = 0100644 GIT_MODE_TREE = 040000 GIT_MODE_SYMLINK = 0120000 assert(GIT_MODE_TREE != 40000) # 0xxx should be treated as octal # The purpose of this type of buffer is to avoid copying on peek(), get(), # and eat(). We do copy the buffer contents on put(), but that should # be ok if we always only put() large amounts of data at a time. class Buf: def __init__(self): self.data = '' self.start = 0 def put(self, s): if s: self.data = buffer(self.data, self.start) + s self.start = 0 def peek(self, count): return buffer(self.data, self.start, count) def eat(self, count): self.start += count def get(self, count): v = buffer(self.data, self.start, count) self.start += count return v def used(self): return len(self.data) - self.start def _fadvise_pages_done(fd, first_page, count): assert(first_page >= 0) assert(count >= 0) if count > 0: _helpers.fadvise_done(fd, first_page * sc_page_size, count * sc_page_size) def _nonresident_page_regions(status_bytes, incore_mask, max_region_len=None): """Return (start_page, count) pairs in ascending start_page order for each contiguous region of nonresident pages indicated by the mincore() status_bytes. 
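
For example (assuming incore_mask is 1), status_bytes of [1, 0, 0, 1, 0]
would yield (1, 2) and then (4, 1).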
Limit the number of pages in each region to max_region_len.""" assert(max_region_len is None or max_region_len > 0) start = None for i, x in enumerate(status_bytes): in_core = x & incore_mask if start is None: if not in_core: start = i else: count = i - start if in_core: yield (start, count) start = None elif max_region_len and count >= max_region_len: yield (start, count) start = i if start is not None: yield (start, len(status_bytes) - start) def _uncache_ours_upto(fd, offset, first_region, remaining_regions): """Uncache the pages of fd indicated by first_region and remaining_regions that are before offset, where each region is a (start_page, count) pair. The final region must have a start_page of None.""" rstart, rlen = first_region while rstart is not None and (rstart + rlen) * sc_page_size <= offset: _fadvise_pages_done(fd, rstart, rlen) rstart, rlen = next(remaining_regions, (None, None)) return (rstart, rlen) def readfile_iter(files, progress=None): for filenum,f in enumerate(files): ofs = 0 b = '' fd = rpr = rstart = rlen = None if _fmincore and hasattr(f, 'fileno'): try: fd = f.fileno() except io.UnsupportedOperation: pass if fd: mcore = _fmincore(fd) if mcore: max_chunk = max(1, (8 * 1024 * 1024) / sc_page_size) rpr = _nonresident_page_regions(mcore, helpers.MINCORE_INCORE, max_chunk) rstart, rlen = next(rpr, (None, None)) while 1: if progress: progress(filenum, len(b)) b = f.read(BLOB_READ_SIZE) ofs += len(b) if rpr: rstart, rlen = _uncache_ours_upto(fd, ofs, (rstart, rlen), rpr) if not b: break yield b if rpr: rstart, rlen = _uncache_ours_upto(fd, ofs, (rstart, rlen), rpr) def _splitbuf(buf, basebits, fanbits): while 1: b = buf.peek(buf.used()) (ofs, bits) = _helpers.splitbuf(b) if ofs: if ofs > BLOB_MAX: ofs = BLOB_MAX level = 0 else: level = (bits-basebits)//fanbits # integer division buf.eat(ofs) yield buffer(b, 0, ofs), level else: break while buf.used() >= BLOB_MAX: # limit max blob size yield buf.get(BLOB_MAX), 0 def _hashsplit_iter(files, progress): assert(BLOB_READ_SIZE > BLOB_MAX) basebits = _helpers.blobbits() fanbits = int(math.log(fanout or 128, 2)) buf = Buf() for inblock in readfile_iter(files, progress): buf.put(inblock) for buf_and_level in _splitbuf(buf, basebits, fanbits): yield buf_and_level if buf.used(): yield buf.get(buf.used()), 0 def _hashsplit_iter_keep_boundaries(files, progress): for real_filenum,f in enumerate(files): if progress: def prog(filenum, nbytes): # the inner _hashsplit_iter doesn't know the real file count, # so we'll replace it here. 
return progress(real_filenum, nbytes) else: prog = None for buf_and_level in _hashsplit_iter([f], progress=prog): yield buf_and_level def hashsplit_iter(files, keep_boundaries, progress): if keep_boundaries: return _hashsplit_iter_keep_boundaries(files, progress) else: return _hashsplit_iter(files, progress) total_split = 0 def split_to_blobs(makeblob, files, keep_boundaries, progress): global total_split for (blob, level) in hashsplit_iter(files, keep_boundaries, progress): sha = makeblob(blob) total_split += len(blob) if progress_callback: progress_callback(len(blob)) yield (sha, len(blob), level) def _make_shalist(l): ofs = 0 l = list(l) total = sum(size for mode,sha,size, in l) vlen = len('%x' % total) shalist = [] for (mode, sha, size) in l: shalist.append((mode, '%0*x' % (vlen,ofs), sha)) ofs += size assert(ofs == total) return (shalist, total) def _squish(maketree, stacks, n): i = 0 while i < n or len(stacks[i]) >= MAX_PER_TREE: while len(stacks) <= i+1: stacks.append([]) if len(stacks[i]) == 1: stacks[i+1] += stacks[i] elif stacks[i]: (shalist, size) = _make_shalist(stacks[i]) tree = maketree(shalist) stacks[i+1].append((GIT_MODE_TREE, tree, size)) stacks[i] = [] i += 1 def split_to_shalist(makeblob, maketree, files, keep_boundaries, progress=None): sl = split_to_blobs(makeblob, files, keep_boundaries, progress) assert(fanout != 0) if not fanout: shal = [] for (sha,size,level) in sl: shal.append((GIT_MODE_FILE, sha, size)) return _make_shalist(shal)[0] else: stacks = [[]] for (sha,size,level) in sl: stacks[0].append((GIT_MODE_FILE, sha, size)) _squish(maketree, stacks, level) #log('stacks: %r\n' % [len(i) for i in stacks]) _squish(maketree, stacks, len(stacks)-1) #log('stacks: %r\n' % [len(i) for i in stacks]) return _make_shalist(stacks[-1])[0] def split_to_blob_or_tree(makeblob, maketree, files, keep_boundaries, progress=None): shalist = list(split_to_shalist(makeblob, maketree, files, keep_boundaries, progress)) if len(shalist) == 1: return (shalist[0][0], shalist[0][2]) elif len(shalist) == 0: return (GIT_MODE_FILE, makeblob('')) else: return (GIT_MODE_TREE, maketree(shalist)) def open_noatime(name): fd = _helpers.open_noatime(name) try: return os.fdopen(fd, 'rb', 1024*1024) except: try: os.close(fd) except: pass raise bup-0.29/lib/bup/helpers.py000066400000000000000000001077701303127641400156170ustar00rootroot00000000000000"""Helper functions and classes for bup.""" from collections import namedtuple from ctypes import sizeof, c_void_p from os import environ from contextlib import contextmanager import sys, os, pwd, subprocess, errno, socket, select, mmap, stat, re, struct import hashlib, heapq, math, operator, time, grp, tempfile from bup import _helpers class Nonlocal: """Helper to deal with Python scoping issues""" pass sc_page_size = os.sysconf('SC_PAGE_SIZE') assert(sc_page_size > 0) sc_arg_max = os.sysconf('SC_ARG_MAX') if sc_arg_max == -1: # "no definite limit" - let's choose 2M sc_arg_max = 2 * 1024 * 1024 # This function should really be in helpers, not in bup.options. But we # want options.py to be standalone so people can include it in other projects. from bup.options import _tty_width tty_width = _tty_width def atoi(s): """Convert the string 's' to an integer. Return 0 if s is not a number.""" try: return int(s or '0') except ValueError: return 0 def atof(s): """Convert the string 's' to a float. 
Return 0 if s is not a number.""" try: return float(s or '0') except ValueError: return 0 buglvl = atoi(os.environ.get('BUP_DEBUG', 0)) try: _fdatasync = os.fdatasync except AttributeError: _fdatasync = os.fsync if sys.platform.startswith('darwin'): # Apparently os.fsync on OS X doesn't guarantee to sync all the way down import fcntl def fdatasync(fd): try: return fcntl.fcntl(fd, fcntl.F_FULLFSYNC) except IOError as e: # Fallback for file systems (SMB) that do not support F_FULLFSYNC if e.errno == errno.ENOTSUP: return _fdatasync(fd) else: raise else: fdatasync = _fdatasync def partition(predicate, stream): """Returns (leading_matches_it, rest_it), where leading_matches_it must be completely exhausted before traversing rest_it. """ stream = iter(stream) ns = Nonlocal() ns.first_nonmatch = None def leading_matches(): for x in stream: if predicate(x): yield x else: ns.first_nonmatch = (x,) break def rest(): if ns.first_nonmatch: yield ns.first_nonmatch[0] for x in stream: yield x return (leading_matches(), rest()) def stat_if_exists(path): try: return os.stat(path) except OSError as e: if e.errno != errno.ENOENT: raise return None # Write (blockingly) to sockets that may or may not be in blocking mode. # We need this because our stderr is sometimes eaten by subprocesses # (probably ssh) that sometimes make it nonblocking, if only temporarily, # leading to race conditions. Ick. We'll do it the hard way. def _hard_write(fd, buf): while buf: (r,w,x) = select.select([], [fd], [], None) if not w: raise IOError('select(fd) returned without being writable') try: sz = os.write(fd, buf) except OSError as e: if e.errno != errno.EAGAIN: raise assert(sz >= 0) buf = buf[sz:] _last_prog = 0 def log(s): """Print a log message to stderr.""" global _last_prog sys.stdout.flush() _hard_write(sys.stderr.fileno(), s) _last_prog = 0 def debug1(s): if buglvl >= 1: log(s) def debug2(s): if buglvl >= 2: log(s) istty1 = os.isatty(1) or (atoi(os.environ.get('BUP_FORCE_TTY')) & 1) istty2 = os.isatty(2) or (atoi(os.environ.get('BUP_FORCE_TTY')) & 2) _last_progress = '' def progress(s): """Calls log() if stderr is a TTY. Does nothing otherwise.""" global _last_progress if istty2: log(s) _last_progress = s def qprogress(s): """Calls progress() only if we haven't printed progress in a while. This avoids overloading the stderr buffer with excess junk. """ global _last_prog now = time.time() if now - _last_prog > 0.1: progress(s) _last_prog = now def reprogress(): """Calls progress() to redisplay the most recent progress message. Useful after you've printed some other message that wipes out the progress line. """ if _last_progress and _last_progress.endswith('\r'): progress(_last_progress) def mkdirp(d, mode=None): """Recursively create directories on path 'd'. Unlike os.makedirs(), it doesn't raise an exception if the last element of the path already exists. """ try: if mode: os.makedirs(d, mode) else: os.makedirs(d) except OSError as e: if e.errno == errno.EEXIST: pass else: raise _unspecified_next_default = object() def _fallback_next(it, default=_unspecified_next_default): """Retrieve the next item from the iterator by calling its next() method. 
If default is given, it is returned if the iterator is exhausted, otherwise StopIteration is raised.""" if default is _unspecified_next_default: return it.next() else: try: return it.next() except StopIteration: return default if sys.version_info < (2, 6): next = _fallback_next def merge_iter(iters, pfreq, pfunc, pfinal, key=None): if key: samekey = lambda e, pe: getattr(e, key) == getattr(pe, key, None) else: samekey = operator.eq count = 0 total = sum(len(it) for it in iters) iters = (iter(it) for it in iters) heap = ((next(it, None),it) for it in iters) heap = [(e,it) for e,it in heap if e] heapq.heapify(heap) pe = None while heap: if not count % pfreq: pfunc(count, total) e, it = heap[0] if not samekey(e, pe): pe = e yield e count += 1 try: e = it.next() # Don't use next() function, it's too expensive except StopIteration: heapq.heappop(heap) # remove current else: heapq.heapreplace(heap, (e, it)) # shift current to new location pfinal(count, total) def unlink(f): """Delete a file at path 'f' if it currently exists. Unlike os.unlink(), does not throw an exception if the file didn't already exist. """ try: os.unlink(f) except OSError as e: if e.errno != errno.ENOENT: raise def readpipe(argv, preexec_fn=None, shell=False): """Run a subprocess and return its output.""" p = subprocess.Popen(argv, stdout=subprocess.PIPE, preexec_fn=preexec_fn, shell=shell) out, err = p.communicate() if p.returncode != 0: raise Exception('subprocess %r failed with status %d' % (' '.join(argv), p.returncode)) return out def _argmax_base(command): base_size = 2048 for c in command: base_size += len(command) + 1 for k, v in environ.iteritems(): base_size += len(k) + len(v) + 2 + sizeof(c_void_p) return base_size def _argmax_args_size(args): return sum(len(x) + 1 + sizeof(c_void_p) for x in args) def batchpipe(command, args, preexec_fn=None, arg_max=sc_arg_max): """If args is not empty, yield the output produced by calling the command list with args as a sequence of strings (It may be necessary to return multiple strings in order to respect ARG_MAX).""" # The optional arg_max arg is a workaround for an issue with the # current wvtest behavior. base_size = _argmax_base(command) while args: room = arg_max - base_size i = 0 while i < len(args): next_size = _argmax_args_size(args[i:i+1]) if room - next_size < 0: break room -= next_size i += 1 sub_args = args[:i] args = args[i:] assert(len(sub_args)) yield readpipe(command + sub_args, preexec_fn=preexec_fn) def resolve_parent(p): """Return the absolute path of a file without following any final symlink. Behaves like os.path.realpath, but doesn't follow a symlink for the last element. (ie. if 'p' itself is a symlink, this one won't follow it, but it will follow symlinks in p's directory) """ try: st = os.lstat(p) except OSError: st = None if st and stat.S_ISLNK(st.st_mode): (dir, name) = os.path.split(p) dir = os.path.realpath(dir) out = os.path.join(dir, name) else: out = os.path.realpath(p) #log('realpathing:%r,%r\n' % (p, out)) return out def detect_fakeroot(): "Return True if we appear to be running under fakeroot." 
return os.getenv("FAKEROOTKEY") != None _warned_about_superuser_detection = None def is_superuser(): if sys.platform.startswith('cygwin'): if sys.getwindowsversion()[0] > 5: # Sounds like situation is much more complicated here global _warned_about_superuser_detection if not _warned_about_superuser_detection: log("can't detect root status for OS version > 5; assuming not root") _warned_about_superuser_detection = True return False import ctypes return ctypes.cdll.shell32.IsUserAnAdmin() else: return os.geteuid() == 0 def _cache_key_value(get_value, key, cache): """Return (value, was_cached). If there is a value in the cache for key, use that, otherwise, call get_value(key) which should throw a KeyError if there is no value -- in which case the cached and returned value will be None. """ try: # Do we already have it (or know there wasn't one)? value = cache[key] return value, True except KeyError: pass value = None try: cache[key] = value = get_value(key) except KeyError: cache[key] = None return value, False _uid_to_pwd_cache = {} _name_to_pwd_cache = {} def pwd_from_uid(uid): """Return password database entry for uid (may be a cached value). Return None if no entry is found. """ global _uid_to_pwd_cache, _name_to_pwd_cache entry, cached = _cache_key_value(pwd.getpwuid, uid, _uid_to_pwd_cache) if entry and not cached: _name_to_pwd_cache[entry.pw_name] = entry return entry def pwd_from_name(name): """Return password database entry for name (may be a cached value). Return None if no entry is found. """ global _uid_to_pwd_cache, _name_to_pwd_cache entry, cached = _cache_key_value(pwd.getpwnam, name, _name_to_pwd_cache) if entry and not cached: _uid_to_pwd_cache[entry.pw_uid] = entry return entry _gid_to_grp_cache = {} _name_to_grp_cache = {} def grp_from_gid(gid): """Return password database entry for gid (may be a cached value). Return None if no entry is found. """ global _gid_to_grp_cache, _name_to_grp_cache entry, cached = _cache_key_value(grp.getgrgid, gid, _gid_to_grp_cache) if entry and not cached: _name_to_grp_cache[entry.gr_name] = entry return entry def grp_from_name(name): """Return password database entry for name (may be a cached value). Return None if no entry is found. """ global _gid_to_grp_cache, _name_to_grp_cache entry, cached = _cache_key_value(grp.getgrnam, name, _name_to_grp_cache) if entry and not cached: _gid_to_grp_cache[entry.gr_gid] = entry return entry _username = None def username(): """Get the user's login name.""" global _username if not _username: uid = os.getuid() _username = pwd_from_uid(uid)[0] or 'user%d' % uid return _username _userfullname = None def userfullname(): """Get the user's full name.""" global _userfullname if not _userfullname: uid = os.getuid() entry = pwd_from_uid(uid) if entry: _userfullname = entry[4].split(',')[0] or entry[0] if not _userfullname: _userfullname = 'user%d' % uid return _userfullname _hostname = None def hostname(): """Get the FQDN of this machine.""" global _hostname if not _hostname: _hostname = socket.getfqdn() return _hostname _resource_path = None def resource_path(subdir=''): global _resource_path if not _resource_path: _resource_path = os.environ.get('BUP_RESOURCE_PATH') or '.' 
return os.path.join(_resource_path, subdir) def format_filesize(size): unit = 1024.0 size = float(size) if size < unit: return "%d" % (size) exponent = int(math.log(size) / math.log(unit)) size_prefix = "KMGTPE"[exponent - 1] return "%.1f%s" % (size / math.pow(unit, exponent), size_prefix) class NotOk(Exception): pass class BaseConn: def __init__(self, outp): self.outp = outp def close(self): while self._read(65536): pass def read(self, size): """Read 'size' bytes from input stream.""" self.outp.flush() return self._read(size) def readline(self): """Read from input stream until a newline is found.""" self.outp.flush() return self._readline() def write(self, data): """Write 'data' to output stream.""" #log('%d writing: %d bytes\n' % (os.getpid(), len(data))) self.outp.write(data) def has_input(self): """Return true if input stream is readable.""" raise NotImplemented("Subclasses must implement has_input") def ok(self): """Indicate end of output from last sent command.""" self.write('\nok\n') def error(self, s): """Indicate server error to the client.""" s = re.sub(r'\s+', ' ', str(s)) self.write('\nerror %s\n' % s) def _check_ok(self, onempty): self.outp.flush() rl = '' for rl in linereader(self): #log('%d got line: %r\n' % (os.getpid(), rl)) if not rl: # empty line continue elif rl == 'ok': return None elif rl.startswith('error '): #log('client: error: %s\n' % rl[6:]) return NotOk(rl[6:]) else: onempty(rl) raise Exception('server exited unexpectedly; see errors above') def drain_and_check_ok(self): """Remove all data for the current command from input stream.""" def onempty(rl): pass return self._check_ok(onempty) def check_ok(self): """Verify that server action completed successfully.""" def onempty(rl): raise Exception('expected "ok", got %r' % rl) return self._check_ok(onempty) class Conn(BaseConn): def __init__(self, inp, outp): BaseConn.__init__(self, outp) self.inp = inp def _read(self, size): return self.inp.read(size) def _readline(self): return self.inp.readline() def has_input(self): [rl, wl, xl] = select.select([self.inp.fileno()], [], [], 0) if rl: assert(rl[0] == self.inp.fileno()) return True else: return None def checked_reader(fd, n): while n > 0: rl, _, _ = select.select([fd], [], []) assert(rl[0] == fd) buf = os.read(fd, n) if not buf: raise Exception("Unexpected EOF reading %d more bytes" % n) yield buf n -= len(buf) MAX_PACKET = 128 * 1024 def mux(p, outfd, outr, errr): try: fds = [outr, errr] while p.poll() is None: rl, _, _ = select.select(fds, [], []) for fd in rl: if fd == outr: buf = os.read(outr, MAX_PACKET) if not buf: break os.write(outfd, struct.pack('!IB', len(buf), 1) + buf) elif fd == errr: buf = os.read(errr, 1024) if not buf: break os.write(outfd, struct.pack('!IB', len(buf), 2) + buf) finally: os.write(outfd, struct.pack('!IB', 0, 3)) class DemuxConn(BaseConn): """A helper class for bup's client-server protocol.""" def __init__(self, infd, outp): BaseConn.__init__(self, outp) # Anything that comes through before the sync string was not # multiplexed and can be assumed to be debug/log before mux init. 
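        # Every multiplexed packet that follows has a 5-byte '!IB' header
        # giving its length and stream id (1 = data, 2 = stderr passthrough,
        # 3 = close); see mux() above and _next_packet() below.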
tail = '' while tail != 'BUPMUX': b = os.read(infd, (len(tail) < 6) and (6-len(tail)) or 1) if not b: raise IOError('demux: unexpected EOF during initialization') tail += b sys.stderr.write(tail[:-6]) # pre-mux log messages tail = tail[-6:] self.infd = infd self.reader = None self.buf = None self.closed = False def write(self, data): self._load_buf(0) BaseConn.write(self, data) def _next_packet(self, timeout): if self.closed: return False rl, wl, xl = select.select([self.infd], [], [], timeout) if not rl: return False assert(rl[0] == self.infd) ns = ''.join(checked_reader(self.infd, 5)) n, fdw = struct.unpack('!IB', ns) assert(n <= MAX_PACKET) if fdw == 1: self.reader = checked_reader(self.infd, n) elif fdw == 2: for buf in checked_reader(self.infd, n): sys.stderr.write(buf) elif fdw == 3: self.closed = True debug2("DemuxConn: marked closed\n") return True def _load_buf(self, timeout): if self.buf is not None: return True while not self.closed: while not self.reader: if not self._next_packet(timeout): return False try: self.buf = self.reader.next() return True except StopIteration: self.reader = None return False def _read_parts(self, ix_fn): while self._load_buf(None): assert(self.buf is not None) i = ix_fn(self.buf) if i is None or i == len(self.buf): yv = self.buf self.buf = None else: yv = self.buf[:i] self.buf = self.buf[i:] yield yv if i is not None: break def _readline(self): def find_eol(buf): try: return buf.index('\n')+1 except ValueError: return None return ''.join(self._read_parts(find_eol)) def _read(self, size): csize = [size] def until_size(buf): # Closes on csize if len(buf) < csize[0]: csize[0] -= len(buf) return None else: return csize[0] return ''.join(self._read_parts(until_size)) def has_input(self): return self._load_buf(0) def linereader(f): """Generate a list of input lines from 'f' without terminating newlines.""" while 1: line = f.readline() if not line: break yield line[:-1] def chunkyreader(f, count = None): """Generate a list of chunks of data read from 'f'. If count is None, read until EOF is reached. If count is a positive integer, read 'count' bytes from 'f'. If EOF is reached while reading, raise IOError. """ if count != None: while count > 0: b = f.read(min(count, 65536)) if not b: raise IOError('EOF with %d bytes remaining' % count) yield b count -= len(b) else: while 1: b = f.read(65536) if not b: break yield b @contextmanager def atomically_replaced_file(name, mode='w', buffering=-1): """Yield a file that will be atomically renamed name when leaving the block. This contextmanager yields an open file object that is backed by a temporary file which will be renamed (atomically) to the target name if everything succeeds. The mode and buffering arguments are handled exactly as with open, and the yielded file will have very restrictive permissions, as per mkstemp. 
E.g.:: with atomically_replaced_file('foo.txt', 'w') as f: f.write('hello jack.') """ (ffd, tempname) = tempfile.mkstemp(dir=os.path.dirname(name), text=('b' not in mode)) try: try: f = os.fdopen(ffd, mode, buffering) except: os.close(ffd) raise try: yield f finally: f.close() os.rename(tempname, name) finally: unlink(tempname) # nonexistant file is ignored def slashappend(s): """Append "/" to 's' if it doesn't aleady end in "/".""" if s and not s.endswith('/'): return s + '/' else: return s def _mmap_do(f, sz, flags, prot, close): if not sz: st = os.fstat(f.fileno()) sz = st.st_size if not sz: # trying to open a zero-length map gives an error, but an empty # string has all the same behaviour of a zero-length map, ie. it has # no elements :) return '' map = mmap.mmap(f.fileno(), sz, flags, prot) if close: f.close() # map will persist beyond file close return map def mmap_read(f, sz = 0, close=True): """Create a read-only memory mapped region on file 'f'. If sz is 0, the region will cover the entire file. """ return _mmap_do(f, sz, mmap.MAP_PRIVATE, mmap.PROT_READ, close) def mmap_readwrite(f, sz = 0, close=True): """Create a read-write memory mapped region on file 'f'. If sz is 0, the region will cover the entire file. """ return _mmap_do(f, sz, mmap.MAP_SHARED, mmap.PROT_READ|mmap.PROT_WRITE, close) def mmap_readwrite_private(f, sz = 0, close=True): """Create a read-write memory mapped region on file 'f'. If sz is 0, the region will cover the entire file. The map is private, which means the changes are never flushed back to the file. """ return _mmap_do(f, sz, mmap.MAP_PRIVATE, mmap.PROT_READ|mmap.PROT_WRITE, close) _mincore = getattr(_helpers, 'mincore', None) if _mincore: # ./configure ensures that we're on Linux if MINCORE_INCORE isn't defined. MINCORE_INCORE = getattr(_helpers, 'MINCORE_INCORE', 1) _fmincore_chunk_size = None def _set_fmincore_chunk_size(): global _fmincore_chunk_size pref_chunk_size = 64 * 1024 * 1024 chunk_size = sc_page_size if (sc_page_size < pref_chunk_size): chunk_size = sc_page_size * (pref_chunk_size / sc_page_size) _fmincore_chunk_size = chunk_size def fmincore(fd): """Return the mincore() data for fd as a bytearray whose values can be tested via MINCORE_INCORE, or None if fd does not fully support the operation.""" st = os.fstat(fd) if (st.st_size == 0): return bytearray(0) if not _fmincore_chunk_size: _set_fmincore_chunk_size() pages_per_chunk = _fmincore_chunk_size / sc_page_size; page_count = (st.st_size + sc_page_size - 1) / sc_page_size; chunk_count = page_count / _fmincore_chunk_size if chunk_count < 1: chunk_count = 1 result = bytearray(page_count) for ci in xrange(chunk_count): pos = _fmincore_chunk_size * ci; msize = min(_fmincore_chunk_size, st.st_size - pos) try: m = mmap.mmap(fd, msize, mmap.MAP_PRIVATE, 0, 0, pos) except mmap.error as ex: if ex.errno == errno.EINVAL or ex.errno == errno.ENODEV: # Perhaps the file was a pipe, i.e. "... | bup split ..." return None raise ex _mincore(m, msize, 0, result, ci * pages_per_chunk); return result def parse_timestamp(epoch_str): """Return the number of nanoseconds since the epoch that are described by epoch_str (100ms, 100ns, ...); when epoch_str cannot be parsed, throw a ValueError that may contain additional information.""" ns_per = {'s' : 1000000000, 'ms' : 1000000, 'us' : 1000, 'ns' : 1} match = re.match(r'^((?:[-+]?[0-9]+)?)(s|ms|us|ns)$', epoch_str) if not match: if re.match(r'^([-+]?[0-9]+)$', epoch_str): raise ValueError('must include units, i.e. 
100ns, 100ms, ...') raise ValueError() (n, units) = match.group(1, 2) if not n: n = 1 n = int(n) return n * ns_per[units] def parse_num(s): """Parse data size information into a float number. Here are some examples of conversions: 199.2k means 203981 bytes 1GB means 1073741824 bytes 2.1 tb means 2199023255552 bytes """ g = re.match(r'([-+\d.e]+)\s*(\w*)', str(s)) if not g: raise ValueError("can't parse %r as a number" % s) (val, unit) = g.groups() num = float(val) unit = unit.lower() if unit in ['t', 'tb']: mult = 1024*1024*1024*1024 elif unit in ['g', 'gb']: mult = 1024*1024*1024 elif unit in ['m', 'mb']: mult = 1024*1024 elif unit in ['k', 'kb']: mult = 1024 elif unit in ['', 'b']: mult = 1 else: raise ValueError("invalid unit %r in number %r" % (unit, s)) return int(num*mult) def count(l): """Count the number of elements in an iterator. (consumes the iterator)""" return reduce(lambda x,y: x+1, l) saved_errors = [] def add_error(e): """Append an error message to the list of saved errors. Once processing is able to stop and output the errors, the saved errors are accessible in the module variable helpers.saved_errors. """ saved_errors.append(e) log('%-70s\n' % e) def clear_errors(): global saved_errors saved_errors = [] def die_if_errors(msg=None, status=1): global saved_errors if saved_errors: if not msg: msg = 'warning: %d errors encountered\n' % len(saved_errors) log(msg) sys.exit(status) def handle_ctrl_c(): """Replace the default exception handler for KeyboardInterrupt (Ctrl-C). The new exception handler will make sure that bup will exit without an ugly stacktrace when Ctrl-C is hit. """ oldhook = sys.excepthook def newhook(exctype, value, traceback): if exctype == KeyboardInterrupt: log('\nInterrupted.\n') else: return oldhook(exctype, value, traceback) sys.excepthook = newhook def columnate(l, prefix): """Format elements of 'l' in columns with 'prefix' leading each line. The number of columns is determined automatically based on the string lengths. """ if not l: return "" l = l[:] clen = max(len(s) for s in l) ncols = (tty_width() - len(prefix)) / (clen + 2) if ncols <= 1: ncols = 1 clen = 0 cols = [] while len(l) % ncols: l.append('') rows = len(l)/ncols for s in range(0, len(l), rows): cols.append(l[s:s+rows]) out = '' for row in zip(*cols): out += prefix + ''.join(('%-*s' % (clen+2, s)) for s in row) + '\n' return out def parse_date_or_fatal(str, fatal): """Parses the given date or calls Option.fatal(). 
For now we expect a string that contains a float.""" try: date = float(str) except ValueError as e: raise fatal('invalid date format (should be a float): %r' % e) else: return date def parse_excludes(options, fatal): """Traverse the options and extract all excludes, or call Option.fatal().""" excluded_paths = [] for flag in options: (option, parameter) = flag if option == '--exclude': excluded_paths.append(resolve_parent(parameter)) elif option == '--exclude-from': try: f = open(resolve_parent(parameter)) except IOError as e: raise fatal("couldn't read %s" % parameter) for exclude_path in f.readlines(): # FIXME: perhaps this should be rstrip('\n') exclude_path = resolve_parent(exclude_path.strip()) if exclude_path: excluded_paths.append(exclude_path) return sorted(frozenset(excluded_paths)) def parse_rx_excludes(options, fatal): """Traverse the options and extract all rx excludes, or call Option.fatal().""" excluded_patterns = [] for flag in options: (option, parameter) = flag if option == '--exclude-rx': try: excluded_patterns.append(re.compile(parameter)) except re.error as ex: fatal('invalid --exclude-rx pattern (%s): %s' % (parameter, ex)) elif option == '--exclude-rx-from': try: f = open(resolve_parent(parameter)) except IOError as e: raise fatal("couldn't read %s" % parameter) for pattern in f.readlines(): spattern = pattern.rstrip('\n') if not spattern: continue try: excluded_patterns.append(re.compile(spattern)) except re.error as ex: fatal('invalid --exclude-rx pattern (%s): %s' % (spattern, ex)) return excluded_patterns def should_rx_exclude_path(path, exclude_rxs): """Return True if path matches a regular expression in exclude_rxs.""" for rx in exclude_rxs: if rx.search(path): debug1('Skipping %r: excluded by rx pattern %r.\n' % (path, rx.pattern)) return True return False # FIXME: Carefully consider the use of functions (os.path.*, etc.) # that resolve against the current filesystem in the strip/graft # functions for example, but elsewhere as well. I suspect bup's not # always being careful about that. For some cases, the contents of # the current filesystem should be irrelevant, and consulting it might # produce the wrong result, perhaps via unintended symlink resolution, # for example. def path_components(path): """Break path into a list of pairs of the form (name, full_path_to_name). Path must start with '/'. Example: '/home/foo' -> [('', '/'), ('home', '/home'), ('foo', '/home/foo')]""" if not path.startswith('/'): raise Exception, 'path must start with "/": %s' % path # Since we assume path startswith('/'), we can skip the first element. result = [('', '/')] norm_path = os.path.abspath(path) if norm_path == '/': return result full_path = '' for p in norm_path.split('/')[1:]: full_path += '/' + p result.append((p, full_path)) return result def stripped_path_components(path, strip_prefixes): """Strip any prefix in strip_prefixes from path and return a list of path components where each component is (name, none_or_full_fs_path_to_name). Assume path startswith('/'). See thelpers.py for examples.""" normalized_path = os.path.abspath(path) sorted_strip_prefixes = sorted(strip_prefixes, key=len, reverse=True) for bp in sorted_strip_prefixes: normalized_bp = os.path.abspath(bp) if normalized_bp == '/': continue if normalized_path.startswith(normalized_bp): prefix = normalized_path[:len(normalized_bp)] result = [] for p in normalized_path[len(normalized_bp):].split('/'): if p: # not root prefix += '/' prefix += p result.append((p, prefix)) return result # Nothing to strip. 
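# For reference, path_components(), used by the fallback return just below,
# maps (per its docstring):
#   path_components('/home/foo') == [('', '/'), ('home', '/home'), ('foo', '/home/foo')]
# When a prefix such as '/home' is stripped above, the leading components are
# dropped from the names while each entry keeps the full filesystem path in
# its second element; see t/thelpers.py for concrete cases.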
return path_components(path) def grafted_path_components(graft_points, path): # Create a result that consists of some number of faked graft # directories before the graft point, followed by all of the real # directories from path that are after the graft point. Arrange # for the directory at the graft point in the result to correspond # to the "orig" directory in --graft orig=new. See t/thelpers.py # for some examples. # Note that given --graft orig=new, orig and new have *nothing* to # do with each other, even if some of their component names # match. i.e. --graft /foo/bar/baz=/foo/bar/bax is semantically # equivalent to --graft /foo/bar/baz=/x/y/z, or even # /foo/bar/baz=/x. # FIXME: This can't be the best solution... clean_path = os.path.abspath(path) for graft_point in graft_points: old_prefix, new_prefix = graft_point # Expand prefixes iff not absolute paths. old_prefix = os.path.normpath(old_prefix) new_prefix = os.path.normpath(new_prefix) if clean_path.startswith(old_prefix): escaped_prefix = re.escape(old_prefix) grafted_path = re.sub(r'^' + escaped_prefix, new_prefix, clean_path) # Handle /foo=/ (at least) -- which produces //whatever. grafted_path = '/' + grafted_path.lstrip('/') clean_path_components = path_components(clean_path) # Count the components that were stripped. strip_count = 0 if old_prefix == '/' else old_prefix.count('/') new_prefix_parts = new_prefix.split('/') result_prefix = grafted_path.split('/')[:new_prefix.count('/')] result = [(p, None) for p in result_prefix] \ + clean_path_components[strip_count:] # Now set the graft point name to match the end of new_prefix. graft_point = len(result_prefix) result[graft_point] = \ (new_prefix_parts[-1], clean_path_components[strip_count][1]) if new_prefix == '/': # --graft ...=/ is a special case. return result[1:] return result return path_components(clean_path) Sha1 = hashlib.sha1 _localtime = getattr(_helpers, 'localtime', None) if _localtime: bup_time = namedtuple('bup_time', ['tm_year', 'tm_mon', 'tm_mday', 'tm_hour', 'tm_min', 'tm_sec', 'tm_wday', 'tm_yday', 'tm_isdst', 'tm_gmtoff', 'tm_zone']) # Define a localtime() that returns bup_time when possible. Note: # this means that any helpers.localtime() results may need to be # passed through to_py_time() before being passed to python's time # module, which doesn't appear willing to ignore the extra items. if _localtime: def localtime(time): return bup_time(*_helpers.localtime(time)) def utc_offset_str(t): """Return the local offset from UTC as "+hhmm" or "-hhmm" for time t. If the current UTC offset does not represent an integer number of minutes, the fractional component will be truncated.""" off = localtime(t).tm_gmtoff # Note: // doesn't truncate like C for negative values, it rounds down. 
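# Worked example: for a zone 5 hours 30 minutes west of UTC, tm_gmtoff is
# -19800, so offmin == 330, m == 30, h == 5, and the format below yields
# '-0530'; east of UTC by the same amount it yields '+0530'.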
offmin = abs(off) // 60 m = offmin % 60 h = (offmin - m) // 60 return "%+03d%02d" % (-h if off < 0 else h, m) def to_py_time(x): if isinstance(x, time.struct_time): return x return time.struct_time(x[:9]) else: localtime = time.localtime def utc_offset_str(t): return time.strftime('%z', localtime(t)) def to_py_time(x): return x _some_invalid_save_parts_rx = re.compile(r'[[ ~^:?*\\]|\.\.|//|@{') def valid_save_name(name): # Enforce a superset of the restrictions in git-check-ref-format(1) if name == '@' \ or name.startswith('/') or name.endswith('/') \ or name.endswith('.'): return False if _some_invalid_save_parts_rx.search(name): return False for c in name: if ord(c) < 0x20 or ord(c) == 0x7f: return False for part in name.split('/'): if part.startswith('.') or part.endswith('.lock'): return False return True _period_rx = re.compile(r'^([0-9]+)(s|min|h|d|w|m|y)$') def period_as_secs(s): if s == 'forever': return float('inf') match = _period_rx.match(s) if not match: return None mag = int(match.group(1)) scale = match.group(2) return mag * {'s': 1, 'min': 60, 'h': 60 * 60, 'd': 60 * 60 * 24, 'w': 60 * 60 * 24 * 7, 'm': 60 * 60 * 24 * 31, 'y': 60 * 60 * 24 * 366}[scale] bup-0.29/lib/bup/hlinkdb.py000066400000000000000000000072301303127641400155560ustar00rootroot00000000000000import cPickle, errno, os, tempfile class Error(Exception): pass class HLinkDB: def __init__(self, filename): # Map a "dev:ino" node to a list of paths associated with that node. self._node_paths = {} # Map a path to a "dev:ino" node. self._path_node = {} self._filename = filename self._save_prepared = None self._tmpname = None f = None try: f = open(filename, 'r') except IOError as e: if e.errno == errno.ENOENT: pass else: raise if f: try: self._node_paths = cPickle.load(f) finally: f.close() f = None # Set up the reverse hard link index. for node, paths in self._node_paths.iteritems(): for path in paths: self._path_node[path] = node def prepare_save(self): """ Commit all of the relevant data to disk. Do as much work as possible without actually making the changes visible.""" if self._save_prepared: raise Error('save of %r already in progress' % self._filename) if self._node_paths: (dir, name) = os.path.split(self._filename) (ffd, self._tmpname) = tempfile.mkstemp('.tmp', name, dir) try: try: f = os.fdopen(ffd, 'wb', 65536) except: os.close(ffd) raise try: cPickle.dump(self._node_paths, f, 2) finally: f.close() f = None except: tmpname = self._tmpname self._tmpname = None os.unlink(tmpname) raise self._save_prepared = True def commit_save(self): if not self._save_prepared: raise Error('cannot commit save of %r; no save prepared' % self._filename) if self._tmpname: os.rename(self._tmpname, self._filename) self._tmpname = None else: # No data -- delete _filename if it exists. try: os.unlink(self._filename) except OSError as e: if e.errno == errno.ENOENT: pass else: raise self._save_prepared = None def abort_save(self): if self._tmpname: os.unlink(self._tmpname) self._tmpname = None def __del__(self): self.abort_save() def add_path(self, path, dev, ino): # Assume path is new. 
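# Illustrative behaviour (hypothetical dev/ino values): two paths that are
# hard links to the same inode end up under one "dev:ino" node key, e.g.
#   db.add_path('/a', 2049, 1234)
#   db.add_path('/b', 2049, 1234)
# leaves _node_paths['2049:1234'] == ['/a', '/b'], so node_paths(2049, 1234)
# can later report every known link to that inode.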
node = '%s:%s' % (dev, ino) self._path_node[path] = node link_paths = self._node_paths.get(node) if link_paths and path not in link_paths: link_paths.append(path) else: self._node_paths[node] = [path] def _del_node_path(self, node, path): link_paths = self._node_paths[node] link_paths.remove(path) if not link_paths: del self._node_paths[node] def change_path(self, path, new_dev, new_ino): prev_node = self._path_node.get(path) if prev_node: self._del_node_path(prev_node, path) self.add_path(new_dev, new_ino, path) def del_path(self, path): # Path may not be in db (if updating a pre-hardlink support index). node = self._path_node.get(path) if node: self._del_node_path(node, path) del self._path_node[path] def node_paths(self, dev, ino): node = '%s:%s' % (dev, ino) return self._node_paths[node] bup-0.29/lib/bup/index.py000066400000000000000000000504121303127641400152520ustar00rootroot00000000000000import errno, metadata, os, stat, struct, tempfile from bup import xstat from bup._helpers import UINT_MAX from bup.helpers import (add_error, log, merge_iter, mmap_readwrite, progress, qprogress, resolve_parent, slashappend) EMPTY_SHA = '\0'*20 FAKE_SHA = '\x01'*20 INDEX_HDR = 'BUPI\0\0\0\7' # Time values are handled as integer nanoseconds since the epoch in # memory, but are written as xstat/metadata timespecs. This behavior # matches the existing metadata/xstat/.bupm code. # Record times (mtime, ctime, atime) as xstat/metadata timespecs, and # store all of the times in the index so they won't interfere with the # forthcoming metadata cache. INDEX_SIG = ('!' 'Q' # dev 'Q' # ino 'Q' # nlink 'qQ' # ctime_s, ctime_ns 'qQ' # mtime_s, mtime_ns 'qQ' # atime_s, atime_ns 'Q' # size 'I' # mode 'I' # gitmode '20s' # sha 'H' # flags 'Q' # children_ofs 'I' # children_n 'Q') # meta_ofs ENTLEN = struct.calcsize(INDEX_SIG) FOOTER_SIG = '!Q' FOOTLEN = struct.calcsize(FOOTER_SIG) IX_EXISTS = 0x8000 # file exists on filesystem IX_HASHVALID = 0x4000 # the stored sha1 matches the filesystem IX_SHAMISSING = 0x2000 # the stored sha1 object doesn't seem to exist class Error(Exception): pass class MetaStoreReader: def __init__(self, filename): self._file = None self._file = open(filename, 'rb') def close(self): if self._file: self._file.close() self._file = None def __del__(self): self.close() def metadata_at(self, ofs): self._file.seek(ofs) return metadata.Metadata.read(self._file) class MetaStoreWriter: # For now, we just append to the file, and try to handle any # truncation or corruption somewhat sensibly. def __init__(self, filename): # Map metadata hashes to bupindex.meta offsets. self._offsets = {} self._filename = filename self._file = None # FIXME: see how slow this is; does it matter? m_file = open(filename, 'ab+') try: m_file.seek(0) try: m_off = m_file.tell() m = metadata.Metadata.read(m_file) while m: m_encoded = m.encode() self._offsets[m_encoded] = m_off m_off = m_file.tell() m = metadata.Metadata.read(m_file) except EOFError: pass except: log('index metadata in %r appears to be corrupt' % filename) raise finally: m_file.close() self._file = open(filename, 'ab') def close(self): if self._file: self._file.close() self._file = None def __del__(self): # Be optimistic. 
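# Typical round trip (illustrative; the .meta path and sample file are
# hypothetical):
#   from bup import metadata
#   w = MetaStoreWriter('bupindex.meta')
#   ofs = w.store(metadata.from_path('/etc/hosts'))
#   w.close()
#   m = MetaStoreReader('bupindex.meta').metadata_at(ofs)
# store() deduplicates identical encoded records, so saving many files that
# share the same owner/mode/times reuses one offset instead of growing the
# store.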
self.close() def store(self, metadata): meta_encoded = metadata.encode(include_path=False) ofs = self._offsets.get(meta_encoded) if ofs: return ofs ofs = self._file.tell() self._file.write(meta_encoded) self._offsets[meta_encoded] = ofs return ofs class Level: def __init__(self, ename, parent): self.parent = parent self.ename = ename self.list = [] self.count = 0 def write(self, f): (ofs,n) = (f.tell(), len(self.list)) if self.list: count = len(self.list) #log('popping %r with %d entries\n' # % (''.join(self.ename), count)) for e in self.list: e.write(f) if self.parent: self.parent.count += count + self.count return (ofs,n) def _golevel(level, f, ename, newentry, metastore, tmax): # close nodes back up the tree assert(level) default_meta_ofs = metastore.store(metadata.Metadata()) while ename[:len(level.ename)] != level.ename: n = BlankNewEntry(level.ename[-1], default_meta_ofs, tmax) n.flags |= IX_EXISTS (n.children_ofs,n.children_n) = level.write(f) level.parent.list.append(n) level = level.parent # create nodes down the tree while len(level.ename) < len(ename): level = Level(ename[:len(level.ename)+1], level) # are we in precisely the right place? assert(ename == level.ename) n = newentry or \ BlankNewEntry(ename and level.ename[-1] or None, default_meta_ofs, tmax) (n.children_ofs,n.children_n) = level.write(f) if level.parent: level.parent.list.append(n) level = level.parent return level class Entry: def __init__(self, basename, name, meta_ofs, tmax): self.basename = str(basename) self.name = str(name) self.meta_ofs = meta_ofs self.tmax = tmax self.children_ofs = 0 self.children_n = 0 def __repr__(self): return ("(%s,0x%04x,%d,%d,%d,%d,%d,%d,%s/%s,0x%04x,%d,0x%08x/%d)" % (self.name, self.dev, self.ino, self.nlink, self.ctime, self.mtime, self.atime, self.size, self.mode, self.gitmode, self.flags, self.meta_ofs, self.children_ofs, self.children_n)) def packed(self): try: ctime = xstat.nsecs_to_timespec(self.ctime) mtime = xstat.nsecs_to_timespec(self.mtime) atime = xstat.nsecs_to_timespec(self.atime) return struct.pack(INDEX_SIG, self.dev, self.ino, self.nlink, ctime[0], ctime[1], mtime[0], mtime[1], atime[0], atime[1], self.size, self.mode, self.gitmode, self.sha, self.flags, self.children_ofs, self.children_n, self.meta_ofs) except (DeprecationWarning, struct.error) as e: log('pack error: %s (%r)\n' % (e, self)) raise def stale(self, st, tstart, check_device=True): if self.size != st.st_size: return True if self.mtime != st.st_mtime: return True if self.sha == EMPTY_SHA: return True if not self.gitmode: return True if self.ctime != st.st_ctime: return True if self.ino != st.st_ino: return True if self.nlink != st.st_nlink: return True if not (self.flags & IX_EXISTS): return True if check_device and (self.dev != st.st_dev): return True # Check that the ctime's "second" is at or after tstart's. ctime_sec_in_ns = xstat.fstime_floor_secs(st.st_ctime) * 10**9 if ctime_sec_in_ns >= tstart: return True return False def update_from_stat(self, st, meta_ofs): # Should only be called when the entry is stale(), and # invalidate() should almost certainly be called afterward. 
self.dev = st.st_dev self.ino = st.st_ino self.nlink = st.st_nlink self.ctime = st.st_ctime self.mtime = st.st_mtime self.atime = st.st_atime self.size = st.st_size self.mode = st.st_mode self.flags |= IX_EXISTS self.meta_ofs = meta_ofs self._fixup() def _fixup(self): self.mtime = self._fixup_time(self.mtime) self.ctime = self._fixup_time(self.ctime) def _fixup_time(self, t): if self.tmax != None and t > self.tmax: return self.tmax else: return t def is_valid(self): f = IX_HASHVALID|IX_EXISTS return (self.flags & f) == f def invalidate(self): self.flags &= ~IX_HASHVALID def validate(self, gitmode, sha): assert(sha) assert(gitmode) assert(gitmode+0 == gitmode) self.gitmode = gitmode self.sha = sha self.flags |= IX_HASHVALID|IX_EXISTS def exists(self): return not self.is_deleted() def sha_missing(self): return (self.flags & IX_SHAMISSING) or not (self.flags & IX_HASHVALID) def is_deleted(self): return (self.flags & IX_EXISTS) == 0 def set_deleted(self): if self.flags & IX_EXISTS: self.flags &= ~(IX_EXISTS | IX_HASHVALID) def is_real(self): return not self.is_fake() def is_fake(self): return not self.ctime def __cmp__(a, b): return (cmp(b.name, a.name) or cmp(a.is_valid(), b.is_valid()) or cmp(a.is_fake(), b.is_fake())) def write(self, f): f.write(self.basename + '\0' + self.packed()) class NewEntry(Entry): def __init__(self, basename, name, tmax, dev, ino, nlink, ctime, mtime, atime, size, mode, gitmode, sha, flags, meta_ofs, children_ofs, children_n): Entry.__init__(self, basename, name, meta_ofs, tmax) (self.dev, self.ino, self.nlink, self.ctime, self.mtime, self.atime, self.size, self.mode, self.gitmode, self.sha, self.flags, self.children_ofs, self.children_n ) = (dev, ino, nlink, ctime, mtime, atime, size, mode, gitmode, sha, flags, children_ofs, children_n) self._fixup() class BlankNewEntry(NewEntry): def __init__(self, basename, meta_ofs, tmax): NewEntry.__init__(self, basename, basename, tmax, 0, 0, 0, 0, 0, 0, 0, 0, 0, EMPTY_SHA, 0, meta_ofs, 0, 0) class ExistingEntry(Entry): def __init__(self, parent, basename, name, m, ofs): Entry.__init__(self, basename, name, None, None) self.parent = parent self._m = m self._ofs = ofs (self.dev, self.ino, self.nlink, self.ctime, ctime_ns, self.mtime, mtime_ns, self.atime, atime_ns, self.size, self.mode, self.gitmode, self.sha, self.flags, self.children_ofs, self.children_n, self.meta_ofs ) = struct.unpack(INDEX_SIG, str(buffer(m, ofs, ENTLEN))) self.atime = xstat.timespec_to_nsecs((self.atime, atime_ns)) self.mtime = xstat.timespec_to_nsecs((self.mtime, mtime_ns)) self.ctime = xstat.timespec_to_nsecs((self.ctime, ctime_ns)) # effectively, we don't bother messing with IX_SHAMISSING if # not IX_HASHVALID, since it's redundant, and repacking is more # expensive than not repacking. # This is implemented by having sha_missing() check IX_HASHVALID too. def set_sha_missing(self, val): val = val and 1 or 0 oldval = self.sha_missing() and 1 or 0 if val != oldval: flag = val and IX_SHAMISSING or 0 newflags = (self.flags & (~IX_SHAMISSING)) | flag self.flags = newflags self.repack() def unset_sha_missing(self, flag): if self.flags & IX_SHAMISSING: self.flags &= ~IX_SHAMISSING self.repack() def repack(self): self._m[self._ofs:self._ofs+ENTLEN] = self.packed() if self.parent and not self.is_valid(): self.parent.invalidate() self.parent.repack() def iter(self, name=None, wantrecurse=None): dname = name if dname and not dname.endswith('/'): dname += '/' ofs = self.children_ofs assert(ofs <= len(self._m)) assert(self.children_n <= UINT_MAX) # i.e. 
python struct 'I' for i in xrange(self.children_n): eon = self._m.find('\0', ofs) assert(eon >= 0) assert(eon >= ofs) assert(eon > ofs) basename = str(buffer(self._m, ofs, eon-ofs)) child = ExistingEntry(self, basename, self.name + basename, self._m, eon+1) if (not dname or child.name.startswith(dname) or child.name.endswith('/') and dname.startswith(child.name)): if not wantrecurse or wantrecurse(child): for e in child.iter(name=name, wantrecurse=wantrecurse): yield e if not name or child.name == name or child.name.startswith(dname): yield child ofs = eon + 1 + ENTLEN def __iter__(self): return self.iter() class Reader: def __init__(self, filename): self.filename = filename self.m = '' self.writable = False self.count = 0 f = None try: f = open(filename, 'r+') except IOError as e: if e.errno == errno.ENOENT: pass else: raise if f: b = f.read(len(INDEX_HDR)) if b != INDEX_HDR: log('warning: %s: header: expected %r, got %r\n' % (filename, INDEX_HDR, b)) else: st = os.fstat(f.fileno()) if st.st_size: self.m = mmap_readwrite(f) self.writable = True self.count = struct.unpack(FOOTER_SIG, str(buffer(self.m, st.st_size-FOOTLEN, FOOTLEN)))[0] def __del__(self): self.close() def __len__(self): return int(self.count) def forward_iter(self): ofs = len(INDEX_HDR) while ofs+ENTLEN <= len(self.m)-FOOTLEN: eon = self.m.find('\0', ofs) assert(eon >= 0) assert(eon >= ofs) assert(eon > ofs) basename = str(buffer(self.m, ofs, eon-ofs)) yield ExistingEntry(None, basename, basename, self.m, eon+1) ofs = eon + 1 + ENTLEN def iter(self, name=None, wantrecurse=None): if len(self.m) > len(INDEX_HDR)+ENTLEN: dname = name if dname and not dname.endswith('/'): dname += '/' root = ExistingEntry(None, '/', '/', self.m, len(self.m)-FOOTLEN-ENTLEN) for sub in root.iter(name=name, wantrecurse=wantrecurse): yield sub if not dname or dname == root.name: yield root def __iter__(self): return self.iter() def find(self, name): return next((e for e in self.iter(name, wantrecurse=lambda x : True) if e.name == name), None) def exists(self): return self.m def save(self): if self.writable and self.m: self.m.flush() def close(self): self.save() if self.writable and self.m: self.m.close() self.m = None self.writable = False def filter(self, prefixes, wantrecurse=None): for (rp, path) in reduce_paths(prefixes): any_entries = False for e in self.iter(rp, wantrecurse=wantrecurse): any_entries = True assert(e.name.startswith(rp)) name = path + e.name[len(rp):] yield (name, e) if not any_entries: # Always return at least the top for each prefix. # Otherwise something like "save x/y" will produce # nothing if x is up to date. pe = self.find(rp) assert(pe) name = path + pe.name[len(rp):] yield (name, pe) # FIXME: this function isn't very generic, because it splits the filename # in an odd way and depends on a terminating '/' to indicate directories. 
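# For example (illustrative):
#   pathsplit('/home/foo')  == ['/', 'home/', 'foo']     # 'foo' is a file
#   pathsplit('/home/foo/') == ['/', 'home/', 'foo/']    # 'foo/' is a directory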
def pathsplit(p): """Split a path into a list of elements of the file system hierarchy.""" l = p.split('/') l = [i+'/' for i in l[:-1]] + l[-1:] if l[-1] == '': l.pop() # extra blank caused by terminating '/' return l class Writer: def __init__(self, filename, metastore, tmax): self.rootlevel = self.level = Level([], None) self.f = None self.count = 0 self.lastfile = None self.filename = None self.filename = filename = resolve_parent(filename) self.metastore = metastore self.tmax = tmax (dir,name) = os.path.split(filename) (ffd,self.tmpname) = tempfile.mkstemp('.tmp', filename, dir) self.f = os.fdopen(ffd, 'wb', 65536) self.f.write(INDEX_HDR) def __del__(self): self.abort() def abort(self): f = self.f self.f = None if f: f.close() os.unlink(self.tmpname) def flush(self): if self.level: self.level = _golevel(self.level, self.f, [], None, self.metastore, self.tmax) self.count = self.rootlevel.count if self.count: self.count += 1 self.f.write(struct.pack(FOOTER_SIG, self.count)) self.f.flush() assert(self.level == None) def close(self): self.flush() f = self.f self.f = None if f: f.close() os.rename(self.tmpname, self.filename) def _add(self, ename, entry): if self.lastfile and self.lastfile <= ename: raise Error('%r must come before %r' % (''.join(ename), ''.join(self.lastfile))) self.lastfile = ename self.level = _golevel(self.level, self.f, ename, entry, self.metastore, self.tmax) def add(self, name, st, meta_ofs, hashgen = None): endswith = name.endswith('/') ename = pathsplit(name) basename = ename[-1] #log('add: %r %r\n' % (basename, name)) flags = IX_EXISTS sha = None if hashgen: (gitmode, sha) = hashgen(name) flags |= IX_HASHVALID else: (gitmode, sha) = (0, EMPTY_SHA) if st: isdir = stat.S_ISDIR(st.st_mode) assert(isdir == endswith) e = NewEntry(basename, name, self.tmax, st.st_dev, st.st_ino, st.st_nlink, st.st_ctime, st.st_mtime, st.st_atime, st.st_size, st.st_mode, gitmode, sha, flags, meta_ofs, 0, 0) else: assert(endswith) meta_ofs = self.metastore.store(metadata.Metadata()) e = BlankNewEntry(basename, meta_ofs, self.tmax) e.gitmode = gitmode e.sha = sha e.flags = flags self._add(ename, e) def add_ixentry(self, e): e.children_ofs = e.children_n = 0 self._add(pathsplit(e.name), e) def new_reader(self): self.flush() return Reader(self.tmpname) def _slashappend_or_add_error(p, caller): """Return p, after ensuring it has a single trailing slash if it names a directory, unless there's an OSError, in which case, call add_error() and return None.""" try: st = os.lstat(p) except OSError as e: add_error('%s: %s' % (caller, e)) return None else: if stat.S_ISDIR(st.st_mode): return slashappend(p) return p def unique_resolved_paths(paths): "Return a collection of unique resolved paths." 
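# Illustrative (filesystem-dependent; assumes /usr exists, is a directory,
# and involves no symlinks): duplicate spellings collapse to one entry,
#   unique_resolved_paths(['/usr', '/usr/', '/usr/../usr']) == frozenset(['/usr/'])
# since each path goes through resolve_parent() and directories gain a single
# trailing slash.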
rps = (_slashappend_or_add_error(resolve_parent(p), 'unique_resolved_paths') for p in paths) return frozenset((x for x in rps if x is not None)) def reduce_paths(paths): xpaths = [] for p in paths: rp = _slashappend_or_add_error(resolve_parent(p), 'reduce_paths') if rp: xpaths.append((rp, slashappend(p) if rp.endswith('/') else p)) xpaths.sort() paths = [] prev = None for (rp, p) in xpaths: if prev and (prev == rp or (prev.endswith('/') and rp.startswith(prev))): continue # already superceded by previous path paths.append((rp, p)) prev = rp paths.sort(reverse=True) return paths def merge(*iters): def pfunc(count, total): qprogress('bup: merging indexes (%d/%d)\r' % (count, total)) def pfinal(count, total): progress('bup: merging indexes (%d/%d), done.\n' % (count, total)) return merge_iter(iters, 1024, pfunc, pfinal, key='name') bup-0.29/lib/bup/ls.py000066400000000000000000000110051303127641400145540ustar00rootroot00000000000000"""Common code for listing files from a bup repository.""" import copy, os.path, stat, sys, xstat from bup import metadata, options, vfs from helpers import columnate, istty1, log def node_info(n, name, show_hash = False, long_fmt = False, classification = None, numeric_ids = False, human_readable = False): """Return a string containing the information to display for the node n. Classification may be "all", "type", or None.""" result = '' if show_hash: result += "%s " % n.hash.encode('hex') if long_fmt: meta = copy.copy(n.metadata()) if meta: meta.path = name meta.size = n.size() else: # Fake it -- summary_str() is designed to handle a fake. meta = metadata.Metadata() meta.size = n.size() meta.mode = n.mode meta.path = name meta.atime, meta.mtime, meta.ctime = n.atime, n.mtime, n.ctime if stat.S_ISLNK(meta.mode): meta.symlink_target = n.readlink() result += metadata.summary_str(meta, numeric_ids = numeric_ids, classification = classification, human_readable = human_readable) else: result += name if classification: mode = n.metadata() and n.metadata().mode or n.mode result += xstat.classification_str(mode, classification == 'all') return result optspec = """ %sls [-a] [path...] -- s,hash show hash for each file a,all show hidden files A,almost-all show hidden files except . and .. l use a detailed, long listing format d,directory show directories, not contents; don't follow symlinks F,classify append type indicator: dir/ sym@ fifo| sock= exec* file-type append type indicator: dir/ sym@ fifo| sock= human-readable print human readable file sizes (i.e. 3.9K, 4.7M) n,numeric-ids list numeric IDs (user, group, etc.) rather than names """ def do_ls(args, pwd, default='.', onabort=None, spec_prefix=''): """Output a listing of a file or directory in the bup repository. When a long listing is not requested and stdout is attached to a tty, the output is formatted in columns. When not attached to tty (for example when the output is piped to another command), one file is listed per line. """ if onabort: o = options.Options(optspec % spec_prefix, onabort=onabort) else: o = options.Options(optspec % spec_prefix) (opt, flags, extra) = o.parse(args) # Handle order-sensitive options. 
classification = None show_hidden = None for flag in flags: (option, parameter) = flag if option in ('-F', '--classify'): classification = 'all' elif option == '--file-type': classification = 'type' elif option in ('-a', '--all'): show_hidden = 'all' elif option in ('-A', '--almost-all'): show_hidden = 'almost' L = [] def output_node_info(node, name): info = node_info(node, name, show_hash = opt.hash, long_fmt = opt.l, classification = classification, numeric_ids = opt.numeric_ids, human_readable = opt.human_readable) if not opt.l and istty1: L.append(info) else: print info ret = 0 for path in (extra or [default]): try: if opt.directory: n = pwd.lresolve(path) else: n = pwd.try_resolve(path) if not opt.directory and stat.S_ISDIR(n.mode): if show_hidden == 'all': output_node_info(n, '.') # Match non-bup "ls -a ... /". if n.parent: output_node_info(n.parent, '..') else: output_node_info(n, '..') for sub in n: name = sub.name if show_hidden in ('almost', 'all') \ or not len(name)>1 or not name.startswith('.'): output_node_info(sub, name) else: output_node_info(n, os.path.normpath(path)) except vfs.NodeError as e: log('error: %s\n' % e) ret = 1 if L: sys.stdout.write(columnate(L, '')) return ret bup-0.29/lib/bup/metadata.py000066400000000000000000001216261303127641400157310ustar00rootroot00000000000000"""Metadata read/write support for bup.""" # Copyright (C) 2010 Rob Browning # # This code is covered under the terms of the GNU Library General # Public License as described in the bup LICENSE file. from errno import EACCES, EINVAL, ENOTTY, ENOSYS, EOPNOTSUPP from io import BytesIO import errno, os, sys, stat, time, pwd, grp, socket, struct from bup import vint, xstat from bup.drecurse import recursive_dirlist from bup.helpers import add_error, mkdirp, log, is_superuser, format_filesize from bup.helpers import pwd_from_uid, pwd_from_name, grp_from_gid, grp_from_name from bup.xstat import utime, lutime xattr = None if sys.platform.startswith('linux'): try: import xattr except ImportError: log('Warning: Linux xattr support missing; install python-pyxattr.\n') if xattr: try: xattr.get_all except AttributeError: log('Warning: python-xattr module is too old; ' 'install python-pyxattr instead.\n') xattr = None posix1e = None if not (sys.platform.startswith('cygwin') \ or sys.platform.startswith('darwin') \ or sys.platform.startswith('netbsd')): try: import posix1e except ImportError: log('Warning: POSIX ACL support missing; install python-pylibacl.\n') try: from bup._helpers import get_linux_file_attr, set_linux_file_attr except ImportError: # No need for a warning here; the only reason they won't exist is that we're # not on Linux, in which case files don't have any linux attrs anyway, so # lacking the functions isn't a problem. get_linux_file_attr = set_linux_file_attr = None # See the bup_get_linux_file_attr() comments. _suppress_linux_file_attr = \ sys.byteorder == 'big' and struct.calcsize('@l') > struct.calcsize('@i') def check_linux_file_attr_api(): global get_linux_file_attr, set_linux_file_attr if not (get_linux_file_attr or set_linux_file_attr): return if _suppress_linux_file_attr: log('Warning: Linux attr support disabled (see "bup help index").\n') get_linux_file_attr = set_linux_file_attr = None # WARNING: the metadata encoding is *not* stable yet. Caveat emptor! # Q: Consider hardlink support? # Q: Is it OK to store raw linux attr (chattr) flags? # Q: Can anything other than S_ISREG(x) or S_ISDIR(x) support posix1e ACLs? # Q: Is the application of posix1e has_extended() correct? 
# Q: Is one global --numeric-ids argument sufficient? # Q: Do nfsv4 acls trump posix1e acls? (seems likely) # Q: Add support for crtime -- ntfs, and (only internally?) ext*? # FIXME: Fix relative/abs path detection/stripping wrt other platforms. # FIXME: Add nfsv4 acl handling - see nfs4-acl-tools. # FIXME: Consider other entries mentioned in stat(2) (S_IFDOOR, etc.). # FIXME: Consider pack('vvvvsss', ...) optimization. ## FS notes: # # osx (varies between hfs and hfs+): # type - regular dir char block fifo socket ... # perms - rwxrwxrwxsgt # times - ctime atime mtime # uid # gid # hard-link-info (hfs+ only) # link-target # device-major/minor # attributes-osx see chflags # content-type # content-creator # forks # # ntfs # type - regular dir ... # times - creation, modification, posix change, access # hard-link-info # link-target # attributes - see attrib # ACLs # forks (alternate data streams) # crtime? # # fat # type - regular dir ... # perms - rwxrwxrwx (maybe - see wikipedia) # times - creation, modification, access # attributes - see attrib verbose = 0 _have_lchmod = hasattr(os, 'lchmod') def _clean_up_path_for_archive(p): # Not the most efficient approach. result = p # Take everything after any '/../'. pos = result.rfind('/../') if pos != -1: result = result[result.rfind('/../') + 4:] # Take everything after any remaining '../'. if result.startswith("../"): result = result[3:] # Remove any '/./' sequences. pos = result.find('/./') while pos != -1: result = result[0:pos] + '/' + result[pos + 3:] pos = result.find('/./') # Remove any leading '/'s. result = result.lstrip('/') # Replace '//' with '/' everywhere. pos = result.find('//') while pos != -1: result = result[0:pos] + '/' + result[pos + 2:] pos = result.find('//') # Take everything after any remaining './'. if result.startswith('./'): result = result[2:] # Take everything before any remaining '/.'. if result.endswith('/.'): result = result[:-2] if result == '' or result.endswith('/..'): result = '.' return result def _risky_path(p): if p.startswith('/'): return True if p.find('/../') != -1: return True if p.startswith('../'): return True if p.endswith('/..'): return True return False def _clean_up_extract_path(p): result = p.lstrip('/') if result == '': return '.' elif _risky_path(result): return None else: return result # These tags are currently conceptually private to Metadata, and they # must be unique, and must *never* be changed. _rec_tag_end = 0 _rec_tag_path = 1 _rec_tag_common = 2 # times, user, group, type, perms, etc. (legacy/broken) _rec_tag_symlink_target = 3 _rec_tag_posix1e_acl = 4 # getfacl(1), setfacl(1), etc. _rec_tag_nfsv4_acl = 5 # intended to supplant posix1e? (unimplemented) _rec_tag_linux_attr = 6 # lsattr(1) chattr(1) _rec_tag_linux_xattr = 7 # getfattr(1) setfattr(1) _rec_tag_hardlink_target = 8 # hard link target path _rec_tag_common_v2 = 9 # times, user, group, type, perms, etc. (current) _warned_about_attr_einval = None class ApplyError(Exception): # Thrown when unable to apply any given bit of metadata to a path. pass class Metadata: # Metadata is stored as a sequence of tagged binary records. Each # record will have some subset of add, encode, load, create, and # apply methods, i.e. _add_foo... # We do allow an "empty" object as a special case, i.e. no # records. One can be created by trying to write Metadata(), and # for such an object, read() will return None. This is used by # "bup save", for example, as a placeholder in cases where # from_path() fails. 
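    # Concretely (illustrative): the encoded form is a flat stream of
    # (vuint tag, length-prefixed payload) pairs terminated by a bare
    # _rec_tag_end, so an "empty" Metadata() encodes to the single byte '\x00'
    # and Metadata.read() on that byte returns None, while unknown tags are
    # simply skipped via vint.skip_bvec() for forward compatibility.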
# NOTE: if any relevant fields are added or removed, be sure to # update same_file() below. ## Common records # Timestamps are (sec, ns), relative to 1970-01-01 00:00:00, ns # must be non-negative and < 10**9. def _add_common(self, path, st): assert(st.st_uid >= 0) assert(st.st_gid >= 0) self.uid = st.st_uid self.gid = st.st_gid self.atime = st.st_atime self.mtime = st.st_mtime self.ctime = st.st_ctime self.user = self.group = '' entry = pwd_from_uid(st.st_uid) if entry: self.user = entry.pw_name entry = grp_from_gid(st.st_gid) if entry: self.group = entry.gr_name self.mode = st.st_mode # Only collect st_rdev if we might need it for a mknod() # during restore. On some platforms (i.e. kFreeBSD), it isn't # stable for other file types. For example "cp -a" will # change it for a plain file. if stat.S_ISCHR(st.st_mode) or stat.S_ISBLK(st.st_mode): self.rdev = st.st_rdev else: self.rdev = 0 def _same_common(self, other): """Return true or false to indicate similarity in the hardlink sense.""" return self.uid == other.uid \ and self.gid == other.gid \ and self.rdev == other.rdev \ and self.mtime == other.mtime \ and self.ctime == other.ctime \ and self.user == other.user \ and self.group == other.group def _encode_common(self): if not self.mode: return None atime = xstat.nsecs_to_timespec(self.atime) mtime = xstat.nsecs_to_timespec(self.mtime) ctime = xstat.nsecs_to_timespec(self.ctime) result = vint.pack('vvsvsvvVvVvV', self.mode, self.uid, self.user, self.gid, self.group, self.rdev, atime[0], atime[1], mtime[0], mtime[1], ctime[0], ctime[1]) return result def _load_common_rec(self, port, legacy_format=False): unpack_fmt = 'vvsvsvvVvVvV' if legacy_format: unpack_fmt = 'VVsVsVvVvVvV' data = vint.read_bvec(port) (self.mode, self.uid, self.user, self.gid, self.group, self.rdev, self.atime, atime_ns, self.mtime, mtime_ns, self.ctime, ctime_ns) = vint.unpack(unpack_fmt, data) self.atime = xstat.timespec_to_nsecs((self.atime, atime_ns)) self.mtime = xstat.timespec_to_nsecs((self.mtime, mtime_ns)) self.ctime = xstat.timespec_to_nsecs((self.ctime, ctime_ns)) def _recognized_file_type(self): return stat.S_ISREG(self.mode) \ or stat.S_ISDIR(self.mode) \ or stat.S_ISCHR(self.mode) \ or stat.S_ISBLK(self.mode) \ or stat.S_ISFIFO(self.mode) \ or stat.S_ISSOCK(self.mode) \ or stat.S_ISLNK(self.mode) def _create_via_common_rec(self, path, create_symlinks=True): if not self.mode: raise ApplyError('no metadata - cannot create path ' + path) # If the path already exists and is a dir, try rmdir. # If the path already exists and is anything else, try unlink. 
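        # (Illustrative note: the nodes recreated below are deliberately
        # created with restrictive 0o600/0o700 permissions; the recorded mode,
        # ownership and timestamps are applied separately, e.g. via
        # apply_to_path(), once the path exists.)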
st = None try: st = xstat.lstat(path) except OSError as e: if e.errno != errno.ENOENT: raise if st: if stat.S_ISDIR(st.st_mode): try: os.rmdir(path) except OSError as e: if e.errno in (errno.ENOTEMPTY, errno.EEXIST): msg = 'refusing to overwrite non-empty dir ' + path raise Exception(msg) raise else: os.unlink(path) if stat.S_ISREG(self.mode): assert(self._recognized_file_type()) fd = os.open(path, os.O_CREAT|os.O_WRONLY|os.O_EXCL, 0o600) os.close(fd) elif stat.S_ISDIR(self.mode): assert(self._recognized_file_type()) os.mkdir(path, 0o700) elif stat.S_ISCHR(self.mode): assert(self._recognized_file_type()) os.mknod(path, 0o600 | stat.S_IFCHR, self.rdev) elif stat.S_ISBLK(self.mode): assert(self._recognized_file_type()) os.mknod(path, 0o600 | stat.S_IFBLK, self.rdev) elif stat.S_ISFIFO(self.mode): assert(self._recognized_file_type()) os.mknod(path, 0o600 | stat.S_IFIFO) elif stat.S_ISSOCK(self.mode): try: os.mknod(path, 0o600 | stat.S_IFSOCK) except OSError as e: if e.errno in (errno.EINVAL, errno.EPERM): s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) s.bind(path) else: raise elif stat.S_ISLNK(self.mode): assert(self._recognized_file_type()) if self.symlink_target and create_symlinks: # on MacOS, symlink() permissions depend on umask, and there's # no way to chown a symlink after creating it, so we have to # be careful here! oldumask = os.umask((self.mode & 0o777) ^ 0o777) try: os.symlink(self.symlink_target, path) finally: os.umask(oldumask) # FIXME: S_ISDOOR, S_IFMPB, S_IFCMP, S_IFNWK, ... see stat(2). else: assert(not self._recognized_file_type()) add_error('not creating "%s" with unrecognized mode "0x%x"\n' % (path, self.mode)) def _apply_common_rec(self, path, restore_numeric_ids=False): if not self.mode: raise ApplyError('no metadata - cannot apply to ' + path) # FIXME: S_ISDOOR, S_IFMPB, S_IFCMP, S_IFNWK, ... see stat(2). # EACCES errors at this stage are fatal for the current path. if lutime and stat.S_ISLNK(self.mode): try: lutime(path, (self.atime, self.mtime)) except OSError as e: if e.errno == errno.EACCES: raise ApplyError('lutime: %s' % e) else: raise else: try: utime(path, (self.atime, self.mtime)) except OSError as e: if e.errno == errno.EACCES: raise ApplyError('utime: %s' % e) else: raise uid = gid = -1 # By default, do nothing. if is_superuser(): uid = self.uid gid = self.gid if not restore_numeric_ids: if self.uid != 0 and self.user: entry = pwd_from_name(self.user) if entry: uid = entry.pw_uid if self.gid != 0 and self.group: entry = grp_from_name(self.group) if entry: gid = entry.gr_gid else: # not superuser - only consider changing the group/gid user_gids = os.getgroups() if self.gid in user_gids: gid = self.gid if not restore_numeric_ids and self.gid != 0: # The grp might not exist on the local system. 
grps = filter(None, [grp_from_gid(x) for x in user_gids]) if self.group in [x.gr_name for x in grps]: g = grp_from_name(self.group) if g: gid = g.gr_gid if uid != -1 or gid != -1: try: os.lchown(path, uid, gid) except OSError as e: if e.errno == errno.EPERM: add_error('lchown: %s' % e) elif sys.platform.startswith('cygwin') \ and e.errno == errno.EINVAL: add_error('lchown: unknown uid/gid (%d/%d) for %s' % (uid, gid, path)) else: raise if _have_lchmod: try: os.lchmod(path, stat.S_IMODE(self.mode)) except errno.ENOSYS: # Function not implemented pass elif not stat.S_ISLNK(self.mode): os.chmod(path, stat.S_IMODE(self.mode)) ## Path records def _encode_path(self): if self.path: return vint.pack('s', self.path) else: return None def _load_path_rec(self, port): self.path = vint.unpack('s', vint.read_bvec(port))[0] ## Symlink targets def _add_symlink_target(self, path, st): try: if stat.S_ISLNK(st.st_mode): self.symlink_target = os.readlink(path) except OSError as e: add_error('readlink: %s' % e) def _encode_symlink_target(self): return self.symlink_target def _load_symlink_target_rec(self, port): self.symlink_target = vint.read_bvec(port) ## Hardlink targets def _add_hardlink_target(self, target): self.hardlink_target = target def _same_hardlink_target(self, other): """Return true or false to indicate similarity in the hardlink sense.""" return self.hardlink_target == other.hardlink_target def _encode_hardlink_target(self): return self.hardlink_target def _load_hardlink_target_rec(self, port): self.hardlink_target = vint.read_bvec(port) ## POSIX1e ACL records # Recorded as a list: # [txt_id_acl, num_id_acl] # or, if a directory: # [txt_id_acl, num_id_acl, txt_id_default_acl, num_id_default_acl] # The numeric/text distinction only matters when reading/restoring # a stored record. def _add_posix1e_acl(self, path, st): if not posix1e or not posix1e.HAS_EXTENDED_CHECK: return if not stat.S_ISLNK(st.st_mode): acls = None def_acls = None try: if posix1e.has_extended(path): acl = posix1e.ACL(file=path) acls = [acl, acl] # txt and num are the same if stat.S_ISDIR(st.st_mode): def_acl = posix1e.ACL(filedef=path) def_acls = [def_acl, def_acl] except EnvironmentError as e: if e.errno not in (errno.EOPNOTSUPP, errno.ENOSYS): raise if acls: txt_flags = posix1e.TEXT_ABBREVIATE num_flags = posix1e.TEXT_ABBREVIATE | posix1e.TEXT_NUMERIC_IDS acl_rep = [acls[0].to_any_text('', '\n', txt_flags), acls[1].to_any_text('', '\n', num_flags)] if def_acls: acl_rep.append(def_acls[0].to_any_text('', '\n', txt_flags)) acl_rep.append(def_acls[1].to_any_text('', '\n', num_flags)) self.posix1e_acl = acl_rep def _same_posix1e_acl(self, other): """Return true or false to indicate similarity in the hardlink sense.""" return self.posix1e_acl == other.posix1e_acl def _encode_posix1e_acl(self): # Encode as two strings (w/default ACL string possibly empty). if self.posix1e_acl: acls = self.posix1e_acl if len(acls) == 2: acls.extend(['', '']) return vint.pack('ssss', acls[0], acls[1], acls[2], acls[3]) else: return None def _load_posix1e_acl_rec(self, port): acl_rep = vint.unpack('ssss', vint.read_bvec(port)) if acl_rep[2] == '': acl_rep = acl_rep[:2] self.posix1e_acl = acl_rep def _apply_posix1e_acl_rec(self, path, restore_numeric_ids=False): def apply_acl(acl_rep, kind): try: acl = posix1e.ACL(text = acl_rep) except IOError as e: if e.errno == 0: # pylibacl appears to return an IOError with errno # set to 0 if a group referred to by the ACL rep # doesn't exist on the current system. 
raise ApplyError("POSIX1e ACL: can't create %r for %r" % (acl_rep, path)) else: raise try: acl.applyto(path, kind) except IOError as e: if e.errno == errno.EPERM or e.errno == errno.EOPNOTSUPP: raise ApplyError('POSIX1e ACL applyto: %s' % e) else: raise if not posix1e: if self.posix1e_acl: add_error("%s: can't restore ACLs; posix1e support missing.\n" % path) return if self.posix1e_acl: acls = self.posix1e_acl if len(acls) > 2: if restore_numeric_ids: apply_acl(acls[3], posix1e.ACL_TYPE_DEFAULT) else: apply_acl(acls[2], posix1e.ACL_TYPE_DEFAULT) if restore_numeric_ids: apply_acl(acls[1], posix1e.ACL_TYPE_ACCESS) else: apply_acl(acls[0], posix1e.ACL_TYPE_ACCESS) ## Linux attributes (lsattr(1), chattr(1)) def _add_linux_attr(self, path, st): check_linux_file_attr_api() if not get_linux_file_attr: return if stat.S_ISREG(st.st_mode) or stat.S_ISDIR(st.st_mode): try: attr = get_linux_file_attr(path) if attr != 0: self.linux_attr = attr except OSError as e: if e.errno == errno.EACCES: add_error('read Linux attr: %s' % e) elif e.errno in (ENOTTY, ENOSYS, EOPNOTSUPP): # Assume filesystem doesn't support attrs. return elif e.errno == EINVAL: global _warned_about_attr_einval if not _warned_about_attr_einval: log("Ignoring attr EINVAL;" + " if you're not using ntfs-3g, please report: " + repr(path) + '\n') _warned_about_attr_einval = True return else: raise def _same_linux_attr(self, other): """Return true or false to indicate similarity in the hardlink sense.""" return self.linux_attr == other.linux_attr def _encode_linux_attr(self): if self.linux_attr: return vint.pack('V', self.linux_attr) else: return None def _load_linux_attr_rec(self, port): data = vint.read_bvec(port) self.linux_attr = vint.unpack('V', data)[0] def _apply_linux_attr_rec(self, path, restore_numeric_ids=False): if self.linux_attr: check_linux_file_attr_api() if not set_linux_file_attr: add_error("%s: can't restore linuxattrs: " "linuxattr support missing.\n" % path) return try: set_linux_file_attr(path, self.linux_attr) except OSError as e: if e.errno in (EACCES, ENOTTY, EOPNOTSUPP, ENOSYS): raise ApplyError('Linux chattr: %s (0x%s)' % (e, hex(self.linux_attr))) elif e.errno == EINVAL: msg = "if you're not using ntfs-3g, please report" raise ApplyError('Linux chattr: %s (0x%s) (%s)' % (e, hex(self.linux_attr), msg)) else: raise ## Linux extended attributes (getfattr(1), setfattr(1)) def _add_linux_xattr(self, path, st): if not xattr: return try: self.linux_xattr = xattr.get_all(path, nofollow=True) except EnvironmentError as e: if e.errno != errno.EOPNOTSUPP: raise def _same_linux_xattr(self, other): """Return true or false to indicate similarity in the hardlink sense.""" return self.linux_xattr == other.linux_xattr def _encode_linux_xattr(self): if self.linux_xattr: result = vint.pack('V', len(self.linux_xattr)) for name, value in self.linux_xattr: result += vint.pack('ss', name, value) return result else: return None def _load_linux_xattr_rec(self, file): data = vint.read_bvec(file) memfile = BytesIO(data) result = [] for i in range(vint.read_vuint(memfile)): key = vint.read_bvec(memfile) value = vint.read_bvec(memfile) result.append((key, value)) self.linux_xattr = result def _apply_linux_xattr_rec(self, path, restore_numeric_ids=False): if not xattr: if self.linux_xattr: add_error("%s: can't restore xattr; xattr support missing.\n" % path) return if not self.linux_xattr: return try: existing_xattrs = set(xattr.list(path, nofollow=True)) except IOError as e: if e.errno == errno.EACCES: raise ApplyError('xattr.set %r: %s' % 
(path, e)) else: raise for k, v in self.linux_xattr: if k not in existing_xattrs \ or v != xattr.get(path, k, nofollow=True): try: xattr.set(path, k, v, nofollow=True) except IOError as e: if e.errno == errno.EPERM \ or e.errno == errno.EOPNOTSUPP: raise ApplyError('xattr.set %r: %s' % (path, e)) else: raise existing_xattrs -= frozenset([k]) for k in existing_xattrs: try: xattr.remove(path, k, nofollow=True) except IOError as e: if e.errno in (errno.EPERM, errno.EACCES): raise ApplyError('xattr.remove %r: %s' % (path, e)) else: raise def __init__(self): self.mode = self.uid = self.gid = self.user = self.group = None self.atime = self.mtime = self.ctime = None # optional members self.path = None self.size = None self.symlink_target = None self.hardlink_target = None self.linux_attr = None self.linux_xattr = None self.posix1e_acl = None def __repr__(self): result = ['<%s instance at %s' % (self.__class__, hex(id(self)))] if self.path: result += ' path:' + repr(self.path) if self.mode: result += ' mode:' + repr(xstat.mode_str(self.mode) + '(%s)' % hex(self.mode)) if self.uid: result += ' uid:' + str(self.uid) if self.gid: result += ' gid:' + str(self.gid) if self.user: result += ' user:' + repr(self.user) if self.group: result += ' group:' + repr(self.group) if self.size: result += ' size:' + repr(self.size) for name, val in (('atime', self.atime), ('mtime', self.mtime), ('ctime', self.ctime)): result += ' %s:%r' \ % (name, time.strftime('%Y-%m-%d %H:%M %z', time.gmtime(xstat.fstime_floor_secs(val)))) result += '>' return ''.join(result) def write(self, port, include_path=True): records = include_path and [(_rec_tag_path, self._encode_path())] or [] records.extend([(_rec_tag_common_v2, self._encode_common()), (_rec_tag_symlink_target, self._encode_symlink_target()), (_rec_tag_hardlink_target, self._encode_hardlink_target()), (_rec_tag_posix1e_acl, self._encode_posix1e_acl()), (_rec_tag_linux_attr, self._encode_linux_attr()), (_rec_tag_linux_xattr, self._encode_linux_xattr())]) for tag, data in records: if data: vint.write_vuint(port, tag) vint.write_bvec(port, data) vint.write_vuint(port, _rec_tag_end) def encode(self, include_path=True): port = BytesIO() self.write(port, include_path) return port.getvalue() @staticmethod def read(port): # This method should either return a valid Metadata object, # return None if there was no information at all (just a # _rec_tag_end), throw EOFError if there was nothing at all to # read, or throw an Exception if a valid object could not be # read completely. tag = vint.read_vuint(port) if tag == _rec_tag_end: return None try: # From here on, EOF is an error. result = Metadata() while True: # only exit is error (exception) or _rec_tag_end if tag == _rec_tag_path: result._load_path_rec(port) elif tag == _rec_tag_common_v2: result._load_common_rec(port) elif tag == _rec_tag_symlink_target: result._load_symlink_target_rec(port) elif tag == _rec_tag_hardlink_target: result._load_hardlink_target_rec(port) elif tag == _rec_tag_posix1e_acl: result._load_posix1e_acl_rec(port) elif tag == _rec_tag_linux_attr: result._load_linux_attr_rec(port) elif tag == _rec_tag_linux_xattr: result._load_linux_xattr_rec(port) elif tag == _rec_tag_end: return result elif tag == _rec_tag_common: # Should be very rare. 
result._load_common_rec(port, legacy_format = True) else: # unknown record vint.skip_bvec(port) tag = vint.read_vuint(port) except EOFError: raise Exception("EOF while reading Metadata") def isdir(self): return stat.S_ISDIR(self.mode) def create_path(self, path, create_symlinks=True): self._create_via_common_rec(path, create_symlinks=create_symlinks) def apply_to_path(self, path=None, restore_numeric_ids=False): # apply metadata to path -- file must exist if not path: path = self.path if not path: raise Exception('Metadata.apply_to_path() called with no path') if not self._recognized_file_type(): add_error('not applying metadata to "%s"' % path + ' with unrecognized mode "0x%x"\n' % self.mode) return num_ids = restore_numeric_ids for apply_metadata in (self._apply_common_rec, self._apply_posix1e_acl_rec, self._apply_linux_attr_rec, self._apply_linux_xattr_rec): try: apply_metadata(path, restore_numeric_ids=num_ids) except ApplyError as e: add_error(e) def same_file(self, other): """Compare this to other for equivalency. Return true if their information implies they could represent the same file on disk, in the hardlink sense. Assume they're both regular files.""" return self._same_common(other) \ and self._same_hardlink_target(other) \ and self._same_posix1e_acl(other) \ and self._same_linux_attr(other) \ and self._same_linux_xattr(other) def from_path(path, statinfo=None, archive_path=None, save_symlinks=True, hardlink_target=None): result = Metadata() result.path = archive_path st = statinfo or xstat.lstat(path) result.size = st.st_size result._add_common(path, st) if save_symlinks: result._add_symlink_target(path, st) result._add_hardlink_target(hardlink_target) result._add_posix1e_acl(path, st) result._add_linux_attr(path, st) result._add_linux_xattr(path, st) return result def save_tree(output_file, paths, recurse=False, write_paths=True, save_symlinks=True, xdev=False): # Issue top-level rewrite warnings. for path in paths: safe_path = _clean_up_path_for_archive(path) if safe_path != path: log('archiving "%s" as "%s"\n' % (path, safe_path)) if not recurse: for p in paths: safe_path = _clean_up_path_for_archive(p) st = xstat.lstat(p) if stat.S_ISDIR(st.st_mode): safe_path += '/' m = from_path(p, statinfo=st, archive_path=safe_path, save_symlinks=save_symlinks) if verbose: print >> sys.stderr, m.path m.write(output_file, include_path=write_paths) else: start_dir = os.getcwd() try: for (p, st) in recursive_dirlist(paths, xdev=xdev): dirlist_dir = os.getcwd() os.chdir(start_dir) safe_path = _clean_up_path_for_archive(p) m = from_path(p, statinfo=st, archive_path=safe_path, save_symlinks=save_symlinks) if verbose: print >> sys.stderr, m.path m.write(output_file, include_path=write_paths) os.chdir(dirlist_dir) finally: os.chdir(start_dir) def _set_up_path(meta, create_symlinks=True): # Allow directories to exist as a special case -- might have # been created by an earlier longer path. if meta.isdir(): mkdirp(meta.path) else: parent = os.path.dirname(meta.path) if parent: mkdirp(parent) meta.create_path(meta.path, create_symlinks=create_symlinks) all_fields = frozenset(['path', 'mode', 'link-target', 'rdev', 'size', 'uid', 'gid', 'user', 'group', 'atime', 'mtime', 'ctime', 'linux-attr', 'linux-xattr', 'posix1e-acl']) def summary_str(meta, numeric_ids = False, classification = None, human_readable = False): """Return a string containing the "ls -l" style listing for meta. Classification may be "all", "type", or None.""" user_str = group_str = size_or_dev_str = '?' 
symlink_target = None if meta: name = meta.path mode_str = xstat.mode_str(meta.mode) symlink_target = meta.symlink_target mtime_secs = xstat.fstime_floor_secs(meta.mtime) mtime_str = time.strftime('%Y-%m-%d %H:%M', time.localtime(mtime_secs)) if meta.user and not numeric_ids: user_str = meta.user elif meta.uid != None: user_str = str(meta.uid) if meta.group and not numeric_ids: group_str = meta.group elif meta.gid != None: group_str = str(meta.gid) if stat.S_ISCHR(meta.mode) or stat.S_ISBLK(meta.mode): if meta.rdev: size_or_dev_str = '%d,%d' % (os.major(meta.rdev), os.minor(meta.rdev)) elif meta.size != None: if human_readable: size_or_dev_str = format_filesize(meta.size) else: size_or_dev_str = str(meta.size) else: size_or_dev_str = '-' if classification: classification_str = \ xstat.classification_str(meta.mode, classification == 'all') else: mode_str = '?' * 10 mtime_str = '????-??-?? ??:??' classification_str = '?' name = name or '' if classification: name += classification_str if symlink_target: name += ' -> ' + meta.symlink_target return '%-10s %-11s %11s %16s %s' % (mode_str, user_str + "/" + group_str, size_or_dev_str, mtime_str, name) def detailed_str(meta, fields = None): # FIXME: should optional fields be omitted, or empty i.e. "rdev: # 0", "link-target:", etc. if not fields: fields = all_fields result = [] if 'path' in fields: path = meta.path or '' result.append('path: ' + path) if 'mode' in fields: result.append('mode: %s (%s)' % (oct(meta.mode), xstat.mode_str(meta.mode))) if 'link-target' in fields and stat.S_ISLNK(meta.mode): result.append('link-target: ' + meta.symlink_target) if 'rdev' in fields: if meta.rdev: result.append('rdev: %d,%d' % (os.major(meta.rdev), os.minor(meta.rdev))) else: result.append('rdev: 0') if 'size' in fields and meta.size: result.append('size: ' + str(meta.size)) if 'uid' in fields: result.append('uid: ' + str(meta.uid)) if 'gid' in fields: result.append('gid: ' + str(meta.gid)) if 'user' in fields: result.append('user: ' + meta.user) if 'group' in fields: result.append('group: ' + meta.group) if 'atime' in fields: # If we don't have xstat.lutime, that means we have to use # utime(), and utime() has no way to set the mtime/atime of a # symlink. Thus, the mtime/atime of a symlink is meaningless, # so let's not report it. (That way scripts comparing # before/after won't trigger.) 
if xstat.lutime or not stat.S_ISLNK(meta.mode): result.append('atime: ' + xstat.fstime_to_sec_str(meta.atime)) else: result.append('atime: 0') if 'mtime' in fields: if xstat.lutime or not stat.S_ISLNK(meta.mode): result.append('mtime: ' + xstat.fstime_to_sec_str(meta.mtime)) else: result.append('mtime: 0') if 'ctime' in fields: result.append('ctime: ' + xstat.fstime_to_sec_str(meta.ctime)) if 'linux-attr' in fields and meta.linux_attr: result.append('linux-attr: ' + hex(meta.linux_attr)) if 'linux-xattr' in fields and meta.linux_xattr: for name, value in meta.linux_xattr: result.append('linux-xattr: %s -> %s' % (name, repr(value))) if 'posix1e-acl' in fields and meta.posix1e_acl: acl = meta.posix1e_acl[0] result.append('posix1e-acl: ' + acl + '\n') if stat.S_ISDIR(meta.mode): def_acl = meta.posix1e_acl[2] result.append('posix1e-acl-default: ' + def_acl + '\n') return '\n'.join(result) class _ArchiveIterator: def next(self): try: return Metadata.read(self._file) except EOFError: raise StopIteration() def __iter__(self): return self def __init__(self, file): self._file = file def display_archive(file): if verbose > 1: first_item = True for meta in _ArchiveIterator(file): if not first_item: print print detailed_str(meta) first_item = False elif verbose > 0: for meta in _ArchiveIterator(file): print summary_str(meta) elif verbose == 0: for meta in _ArchiveIterator(file): if not meta.path: print >> sys.stderr, \ 'bup: no metadata path, but asked to only display path', \ '(increase verbosity?)' sys.exit(1) print meta.path def start_extract(file, create_symlinks=True): for meta in _ArchiveIterator(file): if not meta: # Hit end record. break if verbose: print >> sys.stderr, meta.path xpath = _clean_up_extract_path(meta.path) if not xpath: add_error(Exception('skipping risky path "%s"' % meta.path)) else: meta.path = xpath _set_up_path(meta, create_symlinks=create_symlinks) def finish_extract(file, restore_numeric_ids=False): all_dirs = [] for meta in _ArchiveIterator(file): if not meta: # Hit end record. break xpath = _clean_up_extract_path(meta.path) if not xpath: add_error(Exception('skipping risky path "%s"' % dir.path)) else: if os.path.isdir(meta.path): all_dirs.append(meta) else: if verbose: print >> sys.stderr, meta.path meta.apply_to_path(path=xpath, restore_numeric_ids=restore_numeric_ids) all_dirs.sort(key = lambda x : len(x.path), reverse=True) for dir in all_dirs: # Don't need to check xpath -- won't be in all_dirs if not OK. xpath = _clean_up_extract_path(dir.path) if verbose: print >> sys.stderr, dir.path dir.apply_to_path(path=xpath, restore_numeric_ids=restore_numeric_ids) def extract(file, restore_numeric_ids=False, create_symlinks=True): # For now, just store all the directories and handle them last, # longest first. all_dirs = [] for meta in _ArchiveIterator(file): if not meta: # Hit end record. break xpath = _clean_up_extract_path(meta.path) if not xpath: add_error(Exception('skipping risky path "%s"' % meta.path)) else: meta.path = xpath if verbose: print >> sys.stderr, '+', meta.path _set_up_path(meta, create_symlinks=create_symlinks) if os.path.isdir(meta.path): all_dirs.append(meta) else: if verbose: print >> sys.stderr, '=', meta.path meta.apply_to_path(restore_numeric_ids=restore_numeric_ids) all_dirs.sort(key = lambda x : len(x.path), reverse=True) for dir in all_dirs: # Don't need to check xpath -- won't be in all_dirs if not OK. 
xpath = _clean_up_extract_path(dir.path) if verbose: print >> sys.stderr, '=', xpath # Shouldn't have to check for risky paths here (omitted above). dir.apply_to_path(path=dir.path, restore_numeric_ids=restore_numeric_ids) bup-0.29/lib/bup/midx.py000066400000000000000000000103211303127641400150770ustar00rootroot00000000000000 import git, glob, mmap, os, struct from bup import _helpers from bup.helpers import log, mmap_read MIDX_VERSION = 4 extract_bits = _helpers.extract_bits _total_searches = 0 _total_steps = 0 class PackMidx: """Wrapper which contains data from multiple index files. Multiple index (.midx) files constitute a wrapper around index (.idx) files and make it possible for bup to expand Git's indexing capabilities to vast amounts of files. """ def __init__(self, filename): self.name = filename self.force_keep = False self.map = None assert(filename.endswith('.midx')) self.map = mmap_read(open(filename)) if str(self.map[0:4]) != 'MIDX': log('Warning: skipping: invalid MIDX header in %r\n' % filename) self.force_keep = True return self._init_failed() ver = struct.unpack('!I', self.map[4:8])[0] if ver < MIDX_VERSION: log('Warning: ignoring old-style (v%d) midx %r\n' % (ver, filename)) self.force_keep = False # old stuff is boring return self._init_failed() if ver > MIDX_VERSION: log('Warning: ignoring too-new (v%d) midx %r\n' % (ver, filename)) self.force_keep = True # new stuff is exciting return self._init_failed() self.bits = _helpers.firstword(self.map[8:12]) self.entries = 2**self.bits self.fanout = buffer(self.map, 12, self.entries*4) self.sha_ofs = 12 + self.entries*4 self.nsha = nsha = self._fanget(self.entries-1) self.shatable = buffer(self.map, self.sha_ofs, nsha*20) self.which_ofs = self.sha_ofs + 20*nsha self.whichlist = buffer(self.map, self.which_ofs, nsha*4) self.idxnames = str(self.map[self.which_ofs + 4*nsha:]).split('\0') def __del__(self): self.close() def _init_failed(self): self.bits = 0 self.entries = 1 self.fanout = buffer('\0\0\0\0') self.shatable = buffer('\0'*20) self.idxnames = [] def _fanget(self, i): start = i*4 s = self.fanout[start:start+4] return _helpers.firstword(s) def _get(self, i): return str(self.shatable[i*20:(i+1)*20]) def _get_idx_i(self, i): return struct.unpack('!I', self.whichlist[i*4:(i+1)*4])[0] def _get_idxname(self, i): return self.idxnames[self._get_idx_i(i)] def close(self): if self.map is not None: self.map.close() self.map = None def exists(self, hash, want_source=False): """Return nonempty if the object exists in the index files.""" global _total_searches, _total_steps _total_searches += 1 want = str(hash) el = extract_bits(want, self.bits) if el: start = self._fanget(el-1) startv = el << (32-self.bits) else: start = 0 startv = 0 end = self._fanget(el) endv = (el+1) << (32-self.bits) _total_steps += 1 # lookup table is a step hashv = _helpers.firstword(hash) #print '(%08x) %08x %08x %08x' % (extract_bits(want, 32), startv, hashv, endv) while start < end: _total_steps += 1 #print '! %08x %08x %08x %d - %d' % (startv, hashv, endv, start, end) mid = start + (hashv-startv)*(end-start-1)/(endv-startv) #print ' %08x %08x %08x %d %d %d' % (startv, hashv, endv, start, mid, end) v = self._get(mid) #print ' %08x' % self._num(v) if v < want: start = mid+1 startv = _helpers.firstword(v) elif v > want: end = mid endv = _helpers.firstword(v) else: # got it! 
return want_source and self._get_idxname(mid) or True return None def __iter__(self): for i in xrange(self._fanget(self.entries-1)): yield buffer(self.shatable, i*20, 20) def __len__(self): return int(self._fanget(self.entries-1)) def clear_midxes(dir=None): dir = dir or git.repo('objects/pack') for midx in glob.glob(os.path.join(dir, '*.midx')): os.unlink(midx) bup-0.29/lib/bup/options.py000066400000000000000000000234701303127641400156420ustar00rootroot00000000000000# Copyright 2010-2012 Avery Pennarun and options.py contributors. # All rights reserved. # # (This license applies to this file but not necessarily the other files in # this package.) # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # 1. Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # # 2. Redistributions in binary form must reproduce the above copyright # notice, this list of conditions and the following disclaimer in # the documentation and/or other materials provided with the # distribution. # # THIS SOFTWARE IS PROVIDED BY AVERY PENNARUN ``AS IS'' AND ANY # EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR # PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL OR # CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, # EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, # PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR # PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF # LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING # NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # """Command-line options parser. With the help of an options spec string, easily parse command-line options. An options spec is made up of two parts, separated by a line with two dashes. The first part is the synopsis of the command and the second one specifies options, one per line. Each non-empty line in the synopsis gives a set of options that can be used together. Option flags must be at the begining of the line and multiple flags are separated by commas. Usually, options have a short, one character flag, and a longer one, but the short one can be omitted. Long option flags are used as the option's key for the OptDict produced when parsing options. When the flag definition is ended with an equal sign, the option takes one string as an argument, and that string will be converted to an integer when possible. Otherwise, the option does not take an argument and corresponds to a boolean flag that is true when the option is given on the command line. The option's description is found at the right of its flags definition, after one or more spaces. The description ends at the end of the line. If the description contains text enclosed in square brackets, the enclosed text will be used as the option's default value. Options can be put in different groups. Options in the same group must be on consecutive lines. Groups are formed by inserting a line that begins with a space. The text on that line will be output after an empty line. 
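As an illustration only (this example is not from any real bup command and
the option names are invented), a spec that follows the rules above might
look like this:

    frobnicate [-v] [-o file] <paths...>
    --
    v,verbose   increase log output (can be repeated)
    o,output=   write results to the given file
     Tuning
    bufsize=    buffer size in bytes [65536]

Parsing a command line against such a spec would yield an OptDict where
opt.verbose, opt.output, and opt.bufsize are all accessible, with
opt.bufsize defaulting to the integer 65536.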
""" import sys, os, textwrap, getopt, re, struct def _invert(v, invert): if invert: return not v return v def _remove_negative_kv(k, v): if k.startswith('no-') or k.startswith('no_'): return k[3:], not v return k,v class OptDict(object): """Dictionary that exposes keys as attributes. Keys can be set or accessed with a "no-" or "no_" prefix to negate the value. """ def __init__(self, aliases): self._opts = {} self._aliases = aliases def _unalias(self, k): k, reinvert = _remove_negative_kv(k, False) k, invert = self._aliases[k] return k, invert ^ reinvert def __setitem__(self, k, v): k, invert = self._unalias(k) self._opts[k] = _invert(v, invert) def __getitem__(self, k): k, invert = self._unalias(k) return _invert(self._opts[k], invert) def __getattr__(self, k): return self[k] def _default_onabort(msg): sys.exit(97) def _intify(v): try: vv = int(v or '') if str(vv) == v: return vv except ValueError: pass return v def _atoi(v): try: return int(v or 0) except ValueError: return 0 def _tty_width(): s = struct.pack("HHHH", 0, 0, 0, 0) try: import fcntl, termios s = fcntl.ioctl(sys.stderr.fileno(), termios.TIOCGWINSZ, s) except (IOError, ImportError): return _atoi(os.environ.get('WIDTH')) or 70 (ysize,xsize,ypix,xpix) = struct.unpack('HHHH', s) return xsize or 70 class Options: """Option parser. When constructed, a string called an option spec must be given. It specifies the synopsis and option flags and their description. For more information about option specs, see the docstring at the top of this file. Two optional arguments specify an alternative parsing function and an alternative behaviour on abort (after having output the usage string). By default, the parser function is getopt.gnu_getopt, and the abort behaviour is to exit the program. """ def __init__(self, optspec, optfunc=getopt.gnu_getopt, onabort=_default_onabort): self.optspec = optspec self._onabort = onabort self.optfunc = optfunc self._aliases = {} self._shortopts = 'h?' self._longopts = ['help', 'usage'] self._hasparms = {} self._defaults = {} self._usagestr = self._gen_usage() # this also parses the optspec def _gen_usage(self): out = [] lines = self.optspec.strip().split('\n') lines.reverse() first_syn = True while lines: l = lines.pop() if l == '--': break out.append('%s: %s\n' % (first_syn and 'usage' or ' or', l)) first_syn = False out.append('\n') last_was_option = False while lines: l = lines.pop() if l.startswith(' '): out.append('%s%s\n' % (last_was_option and '\n' or '', l.lstrip())) last_was_option = False elif l: (flags,extra) = (l + ' ').split(' ', 1) extra = extra.strip() if flags.endswith('='): flags = flags[:-1] has_parm = 1 else: has_parm = 0 g = re.search(r'\[([^\]]*)\]$', extra) if g: defval = _intify(g.group(1)) else: defval = None flagl = flags.split(',') flagl_nice = [] flag_main, invert_main = _remove_negative_kv(flagl[0], False) self._defaults[flag_main] = _invert(defval, invert_main) for _f in flagl: f,invert = _remove_negative_kv(_f, 0) self._aliases[f] = (flag_main, invert_main ^ invert) self._hasparms[f] = has_parm if f == '#': self._shortopts += '0123456789' flagl_nice.append('-#') elif len(f) == 1: self._shortopts += f + (has_parm and ':' or '') flagl_nice.append('-' + f) else: f_nice = re.sub(r'\W', '_', f) self._aliases[f_nice] = (flag_main, invert_main ^ invert) self._longopts.append(f + (has_parm and '=' or '')) self._longopts.append('no-' + f) flagl_nice.append('--' + _f) flags_nice = ', '.join(flagl_nice) if has_parm: flags_nice += ' ...' 
prefix = ' %-20s ' % flags_nice argtext = '\n'.join(textwrap.wrap(extra, width=_tty_width(), initial_indent=prefix, subsequent_indent=' '*28)) out.append(argtext + '\n') last_was_option = True else: out.append('\n') last_was_option = False return ''.join(out).rstrip() + '\n' def usage(self, msg=""): """Print usage string to stderr and abort.""" sys.stderr.write(self._usagestr) if msg: sys.stderr.write(msg) e = self._onabort and self._onabort(msg) or None if e: raise e def fatal(self, msg): """Print an error message to stderr and abort with usage string.""" msg = '\nerror: %s\n' % msg return self.usage(msg) def parse(self, args): """Parse a list of arguments and return (options, flags, extra). In the returned tuple, "options" is an OptDict with known options, "flags" is a list of option flags that were used on the command-line, and "extra" is a list of positional arguments. """ try: (flags,extra) = self.optfunc(args, self._shortopts, self._longopts) except getopt.GetoptError as e: self.fatal(e) opt = OptDict(aliases=self._aliases) for k,v in self._defaults.iteritems(): opt[k] = v for (k,v) in flags: k = k.lstrip('-') if k in ('h', '?', 'help', 'usage'): self.usage() if (self._aliases.get('#') and k in ('0','1','2','3','4','5','6','7','8','9')): v = int(k) # guaranteed to be exactly one digit k, invert = self._aliases['#'] opt['#'] = v else: k, invert = opt._unalias(k) if not self._hasparms[k]: assert(v == '') v = (opt._opts.get(k) or 0) + 1 else: v = _intify(v) opt[k] = _invert(v, invert) return (opt,flags,extra) bup-0.29/lib/bup/path.py000066400000000000000000000005221303127641400150740ustar00rootroot00000000000000"""This is a separate module so we can cleanly getcwd() before anyone does chdir(). """ import sys, os startdir = os.getcwd() def exe(): return (os.environ.get('BUP_MAIN_EXE') or os.path.join(startdir, sys.argv[0])) def exedir(): return os.path.split(exe())[0] def exefile(): return os.path.split(exe())[1] bup-0.29/lib/bup/rm.py000066400000000000000000000124111303127641400145560ustar00rootroot00000000000000 import sys from bup import git, vfs from bup.client import ClientError from bup.git import get_commit_items from bup.helpers import add_error, die_if_errors, log, saved_errors def append_commit(hash, parent, cp, writer): ci = get_commit_items(hash, cp) tree = ci.tree.decode('hex') author = '%s <%s>' % (ci.author_name, ci.author_mail) committer = '%s <%s>' % (ci.committer_name, ci.committer_mail) c = writer.new_commit(tree, parent, author, ci.author_sec, ci.author_offset, committer, ci.committer_sec, ci.committer_offset, ci.message) return c, tree def filter_branch(tip_commit_hex, exclude, writer): # May return None if everything is excluded. commits = [c for _, c in git.rev_list(tip_commit_hex)] commits.reverse() last_c, tree = None, None # Rather than assert that we always find an exclusion here, we'll # just let the StopIteration signal the error. 
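    # Illustrative walk-through (added comment, not in the original source):
    # given an oldest-first commit list [c0, c1, c2, c3] where exclude()
    # matches only c2, first_exclusion below is 2, so c0 and c1 are left
    # untouched, c2 is dropped, and c3 is re-created via append_commit()
    # with c1 as its new parent.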
first_exclusion = next(i for i, c in enumerate(commits) if exclude(c)) if first_exclusion != 0: last_c = commits[first_exclusion - 1] tree = get_commit_items(last_c.encode('hex'), git.cp()).tree.decode('hex') commits = commits[first_exclusion:] for c in commits: if exclude(c): continue last_c, tree = append_commit(c.encode('hex'), last_c, git.cp(), writer) return last_c def rm_saves(saves, writer): assert(saves) branch_node = saves[0].parent for save in saves: # Be certain they're all on the same branch assert(save.parent == branch_node) rm_commits = frozenset([x.dereference().hash for x in saves]) orig_tip = branch_node.hash new_tip = filter_branch(orig_tip.encode('hex'), lambda x: x in rm_commits, writer) assert(orig_tip) assert(new_tip != orig_tip) return orig_tip, new_tip def dead_items(vfs_top, paths): """Return an optimized set of removals, reporting errors via add_error, and if there are any errors, return None, None.""" dead_branches = {} dead_saves = {} # Scan for bad requests, and opportunities to optimize for path in paths: try: n = vfs_top.lresolve(path) except vfs.NodeError as e: add_error('unable to resolve %s: %s' % (path, e)) else: if isinstance(n, vfs.BranchList): # rm /foo branchname = n.name dead_branches[branchname] = n dead_saves.pop(branchname, None) # rm /foo obviates rm /foo/bar elif isinstance(n, vfs.FakeSymlink) and isinstance(n.parent, vfs.BranchList): if n.name == 'latest': add_error("error: cannot delete 'latest' symlink") else: branchname = n.parent.name if branchname not in dead_branches: dead_saves.setdefault(branchname, []).append(n) else: add_error("don't know how to remove %r yet" % n.fullname()) if saved_errors: return None, None return dead_branches, dead_saves def bup_rm(paths, compression=6, verbosity=None): root = vfs.RefList(None) dead_branches, dead_saves = dead_items(root, paths) die_if_errors('not proceeding with any removals\n') updated_refs = {} # ref_name -> (original_ref, tip_commit(bin)) for branch, node in dead_branches.iteritems(): ref = 'refs/heads/' + branch assert(not ref in updated_refs) updated_refs[ref] = (node.hash, None) if dead_saves: writer = git.PackWriter(compression_level=compression) try: for branch, saves in dead_saves.iteritems(): assert(saves) updated_refs['refs/heads/' + branch] = rm_saves(saves, writer) except: if writer: writer.abort() raise else: if writer: # Must close before we can update the ref(s) below. writer.close() # Only update the refs here, at the very end, so that if something # goes wrong above, the old refs will be undisturbed. Make an attempt # to update each ref. 
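    # Descriptive note (added, not in the original source): at this point
    # updated_refs maps a ref name such as 'refs/heads/foo' to
    # (orig_hash, None) when the whole branch is being deleted, or to
    # (orig_hash, new_tip_hash) when rm_saves() above pruned only some saves.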
for ref_name, info in updated_refs.iteritems(): orig_ref, new_ref = info try: if not new_ref: git.delete_ref(ref_name, orig_ref.encode('hex')) else: git.update_ref(ref_name, new_ref, orig_ref) if verbosity: new_hex = new_ref.encode('hex') if orig_ref: orig_hex = orig_ref.encode('hex') log('updated %r (%s -> %s)\n' % (ref_name, orig_hex, new_hex)) else: log('updated %r (%s)\n' % (ref_name, new_hex)) except (git.GitError, ClientError) as ex: if new_ref: add_error('while trying to update %r (%s -> %s): %s' % (ref_name, orig_ref, new_ref, ex)) else: add_error('while trying to delete %r (%s): %s' % (ref_name, orig_ref, ex)) bup-0.29/lib/bup/shquote.py000066400000000000000000000114551303127641400156370ustar00rootroot00000000000000import re q = "'" qq = '"' class QuoteError(Exception): pass def _quotesplit(line): inquote = None inescape = None wordstart = 0 word = '' for i in range(len(line)): c = line[i] if inescape: if inquote == q and c != q: word += '\\' # single-q backslashes can only quote single-q word += c inescape = False elif c == '\\': inescape = True elif c == inquote: inquote = None # this is un-sh-like, but do it for sanity when autocompleting yield (wordstart, word) word = '' wordstart = i+1 elif not inquote and not word and (c == q or c == qq): # the 'not word' constraint on this is un-sh-like, but do it # for sanity when autocompleting inquote = c wordstart = i elif not inquote and c in [' ', '\n', '\r', '\t']: if word: yield (wordstart, word) word = '' wordstart = i+1 else: word += c if word: yield (wordstart, word) if inquote or inescape or word: raise QuoteError() def quotesplit(line): """Split 'line' into a list of offset,word tuples. The words are produced after removing doublequotes, singlequotes, and backslash escapes. Note that this implementation isn't entirely sh-compatible. It only dequotes words that *start* with a quote character, that is, a string like hello"world" will not have its quotes removed, while a string like hello "world" will be turned into [(0, 'hello'), (6, 'world')] (ie. quotes removed). """ l = [] try: for i in _quotesplit(line): l.append(i) except QuoteError: pass return l def unfinished_word(line): """Returns the quotechar,word of any unfinished word at the end of 'line'. You can use this to determine if 'line' is a completely parseable line (ie. one that quotesplit() will finish successfully) or if you need to read more bytes first. Args: line: an input string Returns: quotechar,word: the initial quote char (or None), and the partial word. """ try: for (wordstart,word) in _quotesplit(line): pass except QuoteError: firstchar = line[wordstart] if firstchar in [q, qq]: return (firstchar, word) else: return (None, word) else: return (None, '') def quotify(qtype, word, terminate): """Return a string corresponding to given word, quoted using qtype. The resulting string is dequotable using quotesplit() and can be joined with other quoted strings by adding arbitrary whitespace separators. Args: qtype: one of '', shquote.qq, or shquote.q word: the string to quote. May contain arbitrary characters. terminate: include the trailing quote character, if any. Returns: The quoted string. """ if qtype == qq: return qq + word.replace(qq, '\\"') + (terminate and qq or '') elif qtype == q: return q + word.replace(q, "\\'") + (terminate and q or '') else: return re.sub(r'([\"\' \t\n\r])', r'\\\1', word) def quotify_list(words): """Return a minimally-quoted string produced by quoting each word. 
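    For instance (an illustrative example, not part of the original
    docstring), quotify_list(['ls', 'My File']) returns "ls 'My File'",
    since only the second word contains whitespace and needs quoting.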
This calculates the qtype for each word depending on whether the word already includes singlequote characters, doublequote characters, both, or neither. Args: words: the list of words to quote. Returns: The resulting string, with quoted words separated by ' '. """ wordout = [] for word in words: qtype = q if word and not re.search(r'[\s\"\']', word): qtype = '' elif q in word and qq not in word: qtype = qq wordout.append(quotify(qtype, word, True)) return ' '.join(wordout) def what_to_add(qtype, origword, newword, terminate): """Return a qtype that is needed to finish a partial word. For example, given an origword of '\"frog' and a newword of '\"frogston', returns either: terminate=False: 'ston' terminate=True: 'ston\"' This is useful when calculating tab completion strings for readline. Args: qtype: the type of quoting to use (ie. the first character of origword) origword: the original word that needs completion. newword: the word we want it to be after completion. Must start with origword. terminate: true if we should add the actual quote character at the end. Returns: The string to append to origword to produce (quoted) newword. """ if not newword.startswith(origword): return '' else: qold = quotify(qtype, origword, terminate=False) return quotify(qtype, newword, terminate=terminate)[len(qold):] bup-0.29/lib/bup/ssh.py000066400000000000000000000034331303127641400147410ustar00rootroot00000000000000"""SSH connection. Connect to a remote host via SSH and execute a command on the host. """ import sys, os, re, subprocess from bup import helpers, path def connect(rhost, port, subcmd, stderr=None): """Connect to 'rhost' and execute the bup subcommand 'subcmd' on it.""" assert(not re.search(r'[^\w-]', subcmd)) nicedir = re.sub(r':', "_", path.exedir()) if rhost == '-': rhost = None if not rhost: argv = ['bup', subcmd] else: # WARNING: shell quoting security holes are possible here, so we # have to be super careful. We have to use 'sh -c' because # csh-derived shells can't handle PATH= notation. We can't # set PATH in advance, because ssh probably replaces it. We # can't exec *safely* using argv, because *both* ssh and 'sh -c' # allow shellquoting. So we end up having to double-shellquote # stuff here. 
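        # Rough illustration (added comment, not from the original source):
        # with nicedir '/usr/lib/bup' and subcmd 'server', the remote side
        # effectively ends up running something like
        #   sh -c 'PATH=/usr/lib/bup:$PATH BUP_DEBUG=0 BUP_FORCE_TTY=0 bup server'
        # which is why the directory below is backslash-escaped so heavily --
        # it has to survive both ssh's remote shell and the inner 'sh -c'.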
escapedir = re.sub(r'([^\w/])', r'\\\\\\\1', nicedir) buglvl = helpers.atoi(os.environ.get('BUP_DEBUG')) force_tty = helpers.atoi(os.environ.get('BUP_FORCE_TTY')) cmd = r""" sh -c PATH=%s:'$PATH BUP_DEBUG=%s BUP_FORCE_TTY=%s bup %s' """ % (escapedir, buglvl, force_tty, subcmd) argv = ['ssh'] if port: argv.extend(('-p', port)) argv.extend((rhost, '--', cmd.strip())) #helpers.log('argv is: %r\n' % argv) def setup(): # runs in the child process if not rhost: os.environ['PATH'] = ':'.join([nicedir, os.environ.get('PATH', '')]) os.setsid() return subprocess.Popen(argv, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=stderr, preexec_fn=setup) bup-0.29/lib/bup/t/000077500000000000000000000000001303127641400140325ustar00rootroot00000000000000bup-0.29/lib/bup/t/__init__.py000066400000000000000000000000441303127641400161410ustar00rootroot00000000000000import sys sys.path[:0] = ['../..'] bup-0.29/lib/bup/t/tbloom.py000066400000000000000000000042451303127641400157050ustar00rootroot00000000000000 import errno, platform, tempfile from wvtest import * from bup import bloom from bup.helpers import mkdirp from buptest import no_lingering_errors, test_tempdir @wvtest def test_bloom(): with no_lingering_errors(): with test_tempdir('bup-tbloom-') as tmpdir: hashes = [os.urandom(20) for i in range(100)] class Idx: pass ix = Idx() ix.name='dummy.idx' ix.shatable = ''.join(hashes) for k in (4, 5): b = bloom.create(tmpdir + '/pybuptest.bloom', expected=100, k=k) b.add_idx(ix) WVPASSLT(b.pfalse_positive(), .1) b.close() b = bloom.ShaBloom(tmpdir + '/pybuptest.bloom') all_present = True for h in hashes: all_present &= b.exists(h) WVPASS(all_present) false_positives = 0 for h in [os.urandom(20) for i in range(1000)]: if b.exists(h): false_positives += 1 WVPASSLT(false_positives, 5) os.unlink(tmpdir + '/pybuptest.bloom') tf = tempfile.TemporaryFile(dir=tmpdir) b = bloom.create('bup.bloom', f=tf, expected=100) WVPASSEQ(b.rwfile, tf) WVPASSEQ(b.k, 5) # Test large (~1GiB) filter. This may fail on s390 (31-bit # architecture), and anywhere else where the address space is # sufficiently limited. 
tf = tempfile.TemporaryFile(dir=tmpdir) skip_test = False try: b = bloom.create('bup.bloom', f=tf, expected=2**28, delaywrite=False) except EnvironmentError as ex: (ptr_width, linkage) = platform.architecture() if ptr_width == '32bit' and ex.errno == errno.ENOMEM: WVMSG('skipping large bloom filter test (mmap probably failed) ' + str(ex)) skip_test = True else: raise if not skip_test: WVPASSEQ(b.k, 4) bup-0.29/lib/bup/t/tclient.py000066400000000000000000000123431303127641400160510ustar00rootroot00000000000000 import sys, os, stat, time, random, subprocess, glob from wvtest import * from bup import client, git from bup.helpers import mkdirp from buptest import no_lingering_errors, test_tempdir def randbytes(sz): s = '' for i in xrange(sz): s += chr(random.randrange(0,256)) return s s1 = randbytes(10000) s2 = randbytes(10000) s3 = randbytes(10000) IDX_PAT = '/*.idx' @wvtest def test_server_split_with_indexes(): with no_lingering_errors(): with test_tempdir('bup-tclient-') as tmpdir: os.environ['BUP_MAIN_EXE'] = '../../../bup' os.environ['BUP_DIR'] = bupdir = tmpdir git.init_repo(bupdir) lw = git.PackWriter() c = client.Client(bupdir, create=True) rw = c.new_packwriter() lw.new_blob(s1) lw.close() rw.new_blob(s2) rw.breakpoint() rw.new_blob(s1) rw.close() @wvtest def test_multiple_suggestions(): with no_lingering_errors(): with test_tempdir('bup-tclient-') as tmpdir: os.environ['BUP_MAIN_EXE'] = '../../../bup' os.environ['BUP_DIR'] = bupdir = tmpdir git.init_repo(bupdir) lw = git.PackWriter() lw.new_blob(s1) lw.close() lw = git.PackWriter() lw.new_blob(s2) lw.close() WVPASSEQ(len(glob.glob(git.repo('objects/pack'+IDX_PAT))), 2) c = client.Client(bupdir, create=True) WVPASSEQ(len(glob.glob(c.cachedir+IDX_PAT)), 0) rw = c.new_packwriter() s1sha = rw.new_blob(s1) WVPASS(rw.exists(s1sha)) s2sha = rw.new_blob(s2) # This is a little hacky, but ensures that we test the # code under test while (len(glob.glob(c.cachedir+IDX_PAT)) < 2 and not c.conn.has_input()): pass rw.new_blob(s2) WVPASS(rw.objcache.exists(s1sha)) WVPASS(rw.objcache.exists(s2sha)) rw.new_blob(s3) WVPASSEQ(len(glob.glob(c.cachedir+IDX_PAT)), 2) rw.close() WVPASSEQ(len(glob.glob(c.cachedir+IDX_PAT)), 3) @wvtest def test_dumb_client_server(): with no_lingering_errors(): with test_tempdir('bup-tclient-') as tmpdir: os.environ['BUP_MAIN_EXE'] = '../../../bup' os.environ['BUP_DIR'] = bupdir = tmpdir git.init_repo(bupdir) open(git.repo('bup-dumb-server'), 'w').close() lw = git.PackWriter() lw.new_blob(s1) lw.close() c = client.Client(bupdir, create=True) rw = c.new_packwriter() WVPASSEQ(len(glob.glob(c.cachedir+IDX_PAT)), 1) rw.new_blob(s1) WVPASSEQ(len(glob.glob(c.cachedir+IDX_PAT)), 1) rw.new_blob(s2) rw.close() WVPASSEQ(len(glob.glob(c.cachedir+IDX_PAT)), 2) @wvtest def test_midx_refreshing(): with no_lingering_errors(): with test_tempdir('bup-tclient-') as tmpdir: os.environ['BUP_MAIN_EXE'] = bupmain = '../../../bup' os.environ['BUP_DIR'] = bupdir = tmpdir git.init_repo(bupdir) c = client.Client(bupdir, create=True) rw = c.new_packwriter() rw.new_blob(s1) p1base = rw.breakpoint() p1name = os.path.join(c.cachedir, p1base) s1sha = rw.new_blob(s1) # should not be written; it's already in p1 s2sha = rw.new_blob(s2) p2base = rw.close() p2name = os.path.join(c.cachedir, p2base) del rw pi = git.PackIdxList(bupdir + '/objects/pack') WVPASSEQ(len(pi.packs), 2) pi.refresh() WVPASSEQ(len(pi.packs), 2) WVPASSEQ(sorted([os.path.basename(i.name) for i in pi.packs]), sorted([p1base, p2base])) p1 = git.open_idx(p1name) WVPASS(p1.exists(s1sha)) p2 = 
git.open_idx(p2name) WVFAIL(p2.exists(s1sha)) WVPASS(p2.exists(s2sha)) subprocess.call([bupmain, 'midx', '-f']) pi.refresh() WVPASSEQ(len(pi.packs), 1) pi.refresh(skip_midx=True) WVPASSEQ(len(pi.packs), 2) pi.refresh(skip_midx=False) WVPASSEQ(len(pi.packs), 1) @wvtest def test_remote_parsing(): with no_lingering_errors(): tests = ( (':/bup', ('file', None, None, '/bup')), ('file:///bup', ('file', None, None, '/bup')), ('192.168.1.1:/bup', ('ssh', '192.168.1.1', None, '/bup')), ('ssh://192.168.1.1:2222/bup', ('ssh', '192.168.1.1', '2222', '/bup')), ('ssh://[ff:fe::1]:2222/bup', ('ssh', 'ff:fe::1', '2222', '/bup')), ('bup://foo.com:1950', ('bup', 'foo.com', '1950', None)), ('bup://foo.com:1950/bup', ('bup', 'foo.com', '1950', '/bup')), ('bup://[ff:fe::1]/bup', ('bup', 'ff:fe::1', None, '/bup')),) for remote, values in tests: WVPASSEQ(client.parse_remote(remote), values) try: client.parse_remote('http://asdf.com/bup') WVFAIL() except client.ClientError: WVPASS() bup-0.29/lib/bup/t/tgit.py000066400000000000000000000431211303127641400153540ustar00rootroot00000000000000 from subprocess import check_call import struct, os, time from wvtest import * from bup import git from bup.helpers import localtime, log, mkdirp, readpipe from buptest import no_lingering_errors, test_tempdir top_dir = os.path.realpath('../../..') bup_exe = top_dir + '/bup' def exc(*cmd): cmd_str = ' '.join(cmd) print >> sys.stderr, cmd_str check_call(cmd) def exo(*cmd): cmd_str = ' '.join(cmd) print >> sys.stderr, cmd_str return readpipe(cmd) @wvtest def testmangle(): with no_lingering_errors(): afile = 0100644 afile2 = 0100770 alink = 0120000 adir = 0040000 adir2 = 0040777 WVPASSEQ(git.mangle_name("a", adir2, adir), "a") WVPASSEQ(git.mangle_name(".bup", adir2, adir), ".bup.bupl") WVPASSEQ(git.mangle_name("a.bupa", adir2, adir), "a.bupa.bupl") WVPASSEQ(git.mangle_name("b.bup", alink, alink), "b.bup.bupl") WVPASSEQ(git.mangle_name("b.bu", alink, alink), "b.bu") WVPASSEQ(git.mangle_name("f", afile, afile2), "f") WVPASSEQ(git.mangle_name("f.bup", afile, afile2), "f.bup.bupl") WVPASSEQ(git.mangle_name("f.bup", afile, adir), "f.bup.bup") WVPASSEQ(git.mangle_name("f", afile, adir), "f.bup") WVPASSEQ(git.demangle_name("f.bup", afile), ("f", git.BUP_CHUNKED)) WVPASSEQ(git.demangle_name("f.bupl", afile), ("f", git.BUP_NORMAL)) WVPASSEQ(git.demangle_name("f.bup.bupl", afile), ("f.bup", git.BUP_NORMAL)) WVPASSEQ(git.demangle_name(".bupm", afile), ('', git.BUP_NORMAL)) WVPASSEQ(git.demangle_name(".bupm", adir), ('', git.BUP_CHUNKED)) # for safety, we ignore .bup? suffixes we don't recognize. Future # versions might implement a .bup[a-z] extension as something other # than BUP_NORMAL. 
WVPASSEQ(git.demangle_name("f.bupa", afile), ("f.bupa", git.BUP_NORMAL)) @wvtest def testencode(): with no_lingering_errors(): s = 'hello world' looseb = ''.join(git._encode_looseobj('blob', s)) looset = ''.join(git._encode_looseobj('tree', s)) loosec = ''.join(git._encode_looseobj('commit', s)) packb = ''.join(git._encode_packobj('blob', s)) packt = ''.join(git._encode_packobj('tree', s)) packc = ''.join(git._encode_packobj('commit', s)) WVPASSEQ(git._decode_looseobj(looseb), ('blob', s)) WVPASSEQ(git._decode_looseobj(looset), ('tree', s)) WVPASSEQ(git._decode_looseobj(loosec), ('commit', s)) WVPASSEQ(git._decode_packobj(packb), ('blob', s)) WVPASSEQ(git._decode_packobj(packt), ('tree', s)) WVPASSEQ(git._decode_packobj(packc), ('commit', s)) for i in xrange(10): WVPASS(git._encode_looseobj('blob', s, compression_level=i)) def encode_pobj(n): return ''.join(git._encode_packobj('blob', s, compression_level=n)) WVEXCEPT(ValueError, encode_pobj, -1) WVEXCEPT(ValueError, encode_pobj, 10) WVEXCEPT(ValueError, encode_pobj, 'x') @wvtest def testpacks(): with no_lingering_errors(): with test_tempdir('bup-tgit-') as tmpdir: os.environ['BUP_MAIN_EXE'] = bup_exe os.environ['BUP_DIR'] = bupdir = tmpdir + "/bup" git.init_repo(bupdir) git.verbose = 1 w = git.PackWriter() w.new_blob(os.urandom(100)) w.new_blob(os.urandom(100)) w.abort() w = git.PackWriter() hashes = [] nobj = 1000 for i in range(nobj): hashes.append(w.new_blob(str(i))) log('\n') nameprefix = w.close() print repr(nameprefix) WVPASS(os.path.exists(nameprefix + '.pack')) WVPASS(os.path.exists(nameprefix + '.idx')) r = git.open_idx(nameprefix + '.idx') print repr(r.fanout) for i in range(nobj): WVPASS(r.find_offset(hashes[i]) > 0) WVPASS(r.exists(hashes[99])) WVFAIL(r.exists('\0'*20)) pi = iter(r) for h in sorted(hashes): WVPASSEQ(str(pi.next()).encode('hex'), h.encode('hex')) WVFAIL(r.find_offset('\0'*20)) r = git.PackIdxList(bupdir + '/objects/pack') WVPASS(r.exists(hashes[5])) WVPASS(r.exists(hashes[6])) WVFAIL(r.exists('\0'*20)) @wvtest def test_pack_name_lookup(): with no_lingering_errors(): with test_tempdir('bup-tgit-') as tmpdir: os.environ['BUP_MAIN_EXE'] = bup_exe os.environ['BUP_DIR'] = bupdir = tmpdir + "/bup" git.init_repo(bupdir) git.verbose = 1 packdir = git.repo('objects/pack') idxnames = [] hashes = [] for start in range(0,28,2): w = git.PackWriter() for i in range(start, start+2): hashes.append(w.new_blob(str(i))) log('\n') idxnames.append(os.path.basename(w.close() + '.idx')) r = git.PackIdxList(packdir) WVPASSEQ(len(r.packs), 2) for e,idxname in enumerate(idxnames): for i in range(e*2, (e+1)*2): WVPASSEQ(r.exists(hashes[i], want_source=True), idxname) @wvtest def test_long_index(): with no_lingering_errors(): with test_tempdir('bup-tgit-') as tmpdir: os.environ['BUP_MAIN_EXE'] = bup_exe os.environ['BUP_DIR'] = bupdir = tmpdir + "/bup" git.init_repo(bupdir) w = git.PackWriter() obj_bin = struct.pack('!IIIII', 0x00112233, 0x44556677, 0x88990011, 0x22334455, 0x66778899) obj2_bin = struct.pack('!IIIII', 0x11223344, 0x55667788, 0x99001122, 0x33445566, 0x77889900) obj3_bin = struct.pack('!IIIII', 0x22334455, 0x66778899, 0x00112233, 0x44556677, 0x88990011) pack_bin = struct.pack('!IIIII', 0x99887766, 0x55443322, 0x11009988, 0x77665544, 0x33221100) idx = list(list() for i in xrange(256)) idx[0].append((obj_bin, 1, 0xfffffffff)) idx[0x11].append((obj2_bin, 2, 0xffffffffff)) idx[0x22].append((obj3_bin, 3, 0xff)) w.count = 3 name = tmpdir + '/tmp.idx' r = w._write_pack_idx_v2(name, idx, pack_bin) i = git.PackIdxV2(name, open(name, 
'rb')) WVPASSEQ(i.find_offset(obj_bin), 0xfffffffff) WVPASSEQ(i.find_offset(obj2_bin), 0xffffffffff) WVPASSEQ(i.find_offset(obj3_bin), 0xff) @wvtest def test_check_repo_or_die(): with no_lingering_errors(): with test_tempdir('bup-tgit-') as tmpdir: os.environ['BUP_DIR'] = bupdir = tmpdir + "/bup" orig_cwd = os.getcwd() try: os.chdir(tmpdir) git.init_repo(bupdir) git.check_repo_or_die() # if we reach this point the call above passed WVPASS('check_repo_or_die') os.rename(bupdir + '/objects/pack', bupdir + '/objects/pack.tmp') open(bupdir + '/objects/pack', 'w').close() try: git.check_repo_or_die() except SystemExit as e: WVPASSEQ(e.code, 14) else: WVFAIL() os.unlink(bupdir + '/objects/pack') os.rename(bupdir + '/objects/pack.tmp', bupdir + '/objects/pack') try: git.check_repo_or_die('nonexistantbup.tmp') except SystemExit as e: WVPASSEQ(e.code, 15) else: WVFAIL() finally: os.chdir(orig_cwd) @wvtest def test_commit_parsing(): def restore_env_var(name, val): if val is None: del os.environ[name] else: os.environ[name] = val def showval(commit, val): return readpipe(['git', 'show', '-s', '--pretty=format:%s' % val, commit]).strip() with no_lingering_errors(): with test_tempdir('bup-tgit-') as tmpdir: orig_cwd = os.getcwd() workdir = tmpdir + "/work" repodir = workdir + '/.git' orig_author_name = os.environ.get('GIT_AUTHOR_NAME') orig_author_email = os.environ.get('GIT_AUTHOR_EMAIL') orig_committer_name = os.environ.get('GIT_COMMITTER_NAME') orig_committer_email = os.environ.get('GIT_COMMITTER_EMAIL') os.environ['GIT_AUTHOR_NAME'] = 'bup test' os.environ['GIT_COMMITTER_NAME'] = os.environ['GIT_AUTHOR_NAME'] os.environ['GIT_AUTHOR_EMAIL'] = 'bup@a425bc70a02811e49bdf73ee56450e6f' os.environ['GIT_COMMITTER_EMAIL'] = os.environ['GIT_AUTHOR_EMAIL'] try: readpipe(['git', 'init', workdir]) os.environ['GIT_DIR'] = os.environ['BUP_DIR'] = repodir git.check_repo_or_die(repodir) os.chdir(workdir) with open('foo', 'w') as f: print >> f, 'bar' readpipe(['git', 'add', '.']) readpipe(['git', 'commit', '-am', 'Do something', '--author', 'Someone ', '--date', 'Sat Oct 3 19:48:49 2009 -0400']) commit = readpipe(['git', 'show-ref', '-s', 'master']).strip() parents = showval(commit, '%P') tree = showval(commit, '%T') cname = showval(commit, '%cn') cmail = showval(commit, '%ce') cdate = showval(commit, '%ct') coffs = showval(commit, '%ci') coffs = coffs[-5:] coff = (int(coffs[-4:-2]) * 60 * 60) + (int(coffs[-2:]) * 60) if coffs[-5] == '-': coff = - coff commit_items = git.get_commit_items(commit, git.cp()) WVPASSEQ(commit_items.parents, []) WVPASSEQ(commit_items.tree, tree) WVPASSEQ(commit_items.author_name, 'Someone') WVPASSEQ(commit_items.author_mail, 'someone@somewhere') WVPASSEQ(commit_items.author_sec, 1254613729) WVPASSEQ(commit_items.author_offset, -(4 * 60 * 60)) WVPASSEQ(commit_items.committer_name, cname) WVPASSEQ(commit_items.committer_mail, cmail) WVPASSEQ(commit_items.committer_sec, int(cdate)) WVPASSEQ(commit_items.committer_offset, coff) WVPASSEQ(commit_items.message, 'Do something\n') with open('bar', 'w') as f: print >> f, 'baz' readpipe(['git', 'add', '.']) readpipe(['git', 'commit', '-am', 'Do something else']) child = readpipe(['git', 'show-ref', '-s', 'master']).strip() parents = showval(child, '%P') commit_items = git.get_commit_items(child, git.cp()) WVPASSEQ(commit_items.parents, [commit]) finally: os.chdir(orig_cwd) restore_env_var('GIT_AUTHOR_NAME', orig_author_name) restore_env_var('GIT_AUTHOR_EMAIL', orig_author_email) restore_env_var('GIT_COMMITTER_NAME', orig_committer_name) 
restore_env_var('GIT_COMMITTER_EMAIL', orig_committer_email) @wvtest def test_new_commit(): with no_lingering_errors(): with test_tempdir('bup-tgit-') as tmpdir: os.environ['BUP_MAIN_EXE'] = bup_exe os.environ['BUP_DIR'] = bupdir = tmpdir + "/bup" git.init_repo(bupdir) git.verbose = 1 w = git.PackWriter() tree = os.urandom(20) parent = os.urandom(20) author_name = 'Author' author_mail = 'author@somewhere' adate_sec = 1439657836 cdate_sec = adate_sec + 1 committer_name = 'Committer' committer_mail = 'committer@somewhere' adate_tz_sec = cdate_tz_sec = None commit = w.new_commit(tree, parent, '%s <%s>' % (author_name, author_mail), adate_sec, adate_tz_sec, '%s <%s>' % (committer_name, committer_mail), cdate_sec, cdate_tz_sec, 'There is a small mailbox here') adate_tz_sec = -60 * 60 cdate_tz_sec = 120 * 60 commit_off = w.new_commit(tree, parent, '%s <%s>' % (author_name, author_mail), adate_sec, adate_tz_sec, '%s <%s>' % (committer_name, committer_mail), cdate_sec, cdate_tz_sec, 'There is a small mailbox here') w.close() commit_items = git.get_commit_items(commit.encode('hex'), git.cp()) local_author_offset = localtime(adate_sec).tm_gmtoff local_committer_offset = localtime(cdate_sec).tm_gmtoff WVPASSEQ(tree, commit_items.tree.decode('hex')) WVPASSEQ(1, len(commit_items.parents)) WVPASSEQ(parent, commit_items.parents[0].decode('hex')) WVPASSEQ(author_name, commit_items.author_name) WVPASSEQ(author_mail, commit_items.author_mail) WVPASSEQ(adate_sec, commit_items.author_sec) WVPASSEQ(local_author_offset, commit_items.author_offset) WVPASSEQ(committer_name, commit_items.committer_name) WVPASSEQ(committer_mail, commit_items.committer_mail) WVPASSEQ(cdate_sec, commit_items.committer_sec) WVPASSEQ(local_committer_offset, commit_items.committer_offset) commit_items = git.get_commit_items(commit_off.encode('hex'), git.cp()) WVPASSEQ(tree, commit_items.tree.decode('hex')) WVPASSEQ(1, len(commit_items.parents)) WVPASSEQ(parent, commit_items.parents[0].decode('hex')) WVPASSEQ(author_name, commit_items.author_name) WVPASSEQ(author_mail, commit_items.author_mail) WVPASSEQ(adate_sec, commit_items.author_sec) WVPASSEQ(adate_tz_sec, commit_items.author_offset) WVPASSEQ(committer_name, commit_items.committer_name) WVPASSEQ(committer_mail, commit_items.committer_mail) WVPASSEQ(cdate_sec, commit_items.committer_sec) WVPASSEQ(cdate_tz_sec, commit_items.committer_offset) @wvtest def test_list_refs(): with no_lingering_errors(): with test_tempdir('bup-tgit-') as tmpdir: os.environ['BUP_MAIN_EXE'] = bup_exe os.environ['BUP_DIR'] = bupdir = tmpdir + "/bup" src = tmpdir + '/src' mkdirp(src) with open(src + '/1', 'w+') as f: print f, 'something' with open(src + '/2', 'w+') as f: print f, 'something else' git.init_repo(bupdir) emptyset = frozenset() WVPASSEQ(frozenset(git.list_refs()), emptyset) WVPASSEQ(frozenset(git.list_refs(limit_to_tags=True)), emptyset) WVPASSEQ(frozenset(git.list_refs(limit_to_heads=True)), emptyset) exc(bup_exe, 'index', src) exc(bup_exe, 'save', '-n', 'src', '--strip', src) src_hash = exo('git', '--git-dir', bupdir, 'rev-parse', 'src').strip().split('\n') assert(len(src_hash) == 1) src_hash = src_hash[0].decode('hex') tree_hash = exo('git', '--git-dir', bupdir, 'rev-parse', 'src:').strip().split('\n')[0].decode('hex') blob_hash = exo('git', '--git-dir', bupdir, 'rev-parse', 'src:1').strip().split('\n')[0].decode('hex') WVPASSEQ(frozenset(git.list_refs()), frozenset([('refs/heads/src', src_hash)])) WVPASSEQ(frozenset(git.list_refs(limit_to_tags=True)), emptyset) 
WVPASSEQ(frozenset(git.list_refs(limit_to_heads=True)), frozenset([('refs/heads/src', src_hash)])) exc('git', '--git-dir', bupdir, 'tag', 'commit-tag', 'src') WVPASSEQ(frozenset(git.list_refs()), frozenset([('refs/heads/src', src_hash), ('refs/tags/commit-tag', src_hash)])) WVPASSEQ(frozenset(git.list_refs(limit_to_tags=True)), frozenset([('refs/tags/commit-tag', src_hash)])) WVPASSEQ(frozenset(git.list_refs(limit_to_heads=True)), frozenset([('refs/heads/src', src_hash)])) exc('git', '--git-dir', bupdir, 'tag', 'tree-tag', 'src:') exc('git', '--git-dir', bupdir, 'tag', 'blob-tag', 'src:1') os.unlink(bupdir + '/refs/heads/src') expected_tags = frozenset([('refs/tags/commit-tag', src_hash), ('refs/tags/tree-tag', tree_hash), ('refs/tags/blob-tag', blob_hash)]) WVPASSEQ(frozenset(git.list_refs()), expected_tags) WVPASSEQ(frozenset(git.list_refs(limit_to_heads=True)), frozenset([])) WVPASSEQ(frozenset(git.list_refs(limit_to_tags=True)), expected_tags) def test__git_date_str(): with no_lingering_errors(): WVPASSEQ('0 +0000', git._git_date_str(0, 0)) WVPASSEQ('0 -0130', git._git_date_str(0, -90 * 60)) WVPASSEQ('0 +0130', git._git_date_str(0, 90 * 60)) bup-0.29/lib/bup/t/thashsplit.py000066400000000000000000000122021303127641400165640ustar00rootroot00000000000000from io import BytesIO from wvtest import * from bup import hashsplit, _helpers, helpers from buptest import no_lingering_errors def nr_regions(x, max_count=None): return list(hashsplit._nonresident_page_regions(bytearray(x), 1, max_count)) @wvtest def test_nonresident_page_regions(): with no_lingering_errors(): WVPASSEQ(nr_regions([]), []) WVPASSEQ(nr_regions([1]), []) WVPASSEQ(nr_regions([0]), [(0, 1)]) WVPASSEQ(nr_regions([1, 0]), [(1, 1)]) WVPASSEQ(nr_regions([0, 0]), [(0, 2)]) WVPASSEQ(nr_regions([1, 0, 1]), [(1, 1)]) WVPASSEQ(nr_regions([1, 0, 0]), [(1, 2)]) WVPASSEQ(nr_regions([0, 1, 0]), [(0, 1), (2, 1)]) WVPASSEQ(nr_regions([0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0]), [(0, 2), (5, 3), (9, 2)]) WVPASSEQ(nr_regions([2, 42, 3, 101]), [(0, 2)]) # Test limit WVPASSEQ(nr_regions([0, 0, 0], None), [(0, 3)]) WVPASSEQ(nr_regions([0, 0, 0], 1), [(0, 1), (1, 1), (2, 1)]) WVPASSEQ(nr_regions([0, 0, 0], 2), [(0, 2), (2, 1)]) WVPASSEQ(nr_regions([0, 0, 0], 3), [(0, 3)]) WVPASSEQ(nr_regions([0, 0, 0], 4), [(0, 3)]) WVPASSEQ(nr_regions([0, 0, 1], None), [(0, 2)]) WVPASSEQ(nr_regions([0, 0, 1], 1), [(0, 1), (1, 1)]) WVPASSEQ(nr_regions([0, 0, 1], 2), [(0, 2)]) WVPASSEQ(nr_regions([0, 0, 1], 3), [(0, 2)]) WVPASSEQ(nr_regions([1, 0, 0], None), [(1, 2)]) WVPASSEQ(nr_regions([1, 0, 0], 1), [(1, 1), (2, 1)]) WVPASSEQ(nr_regions([1, 0, 0], 2), [(1, 2)]) WVPASSEQ(nr_regions([1, 0, 0], 3), [(1, 2)]) WVPASSEQ(nr_regions([1, 0, 0, 0, 1], None), [(1, 3)]) WVPASSEQ(nr_regions([1, 0, 0, 0, 1], 1), [(1, 1), (2, 1), (3, 1)]) WVPASSEQ(nr_regions([1, 0, 0, 0, 1], 2), [(1, 2), (3, 1)]) WVPASSEQ(nr_regions([1, 0, 0, 0, 1], 3), [(1, 3)]) WVPASSEQ(nr_regions([1, 0, 0, 0, 1], 4), [(1, 3)]) @wvtest def test_uncache_ours_upto(): history = [] def mock_fadvise_pages_done(f, ofs, len): history.append((f, ofs, len)) with no_lingering_errors(): uncache_upto = hashsplit._uncache_ours_upto page_size = helpers.sc_page_size orig_pages_done = hashsplit._fadvise_pages_done try: hashsplit._fadvise_pages_done = mock_fadvise_pages_done history = [] uncache_upto(42, 0, (0, 1), iter([])) WVPASSEQ([], history) uncache_upto(42, page_size, (0, 1), iter([])) WVPASSEQ([(42, 0, 1)], history) history = [] uncache_upto(42, page_size, (0, 3), iter([(5, 2)])) WVPASSEQ([], history) uncache_upto(42, 2 * 
page_size, (0, 3), iter([(5, 2)])) WVPASSEQ([], history) uncache_upto(42, 3 * page_size, (0, 3), iter([(5, 2)])) WVPASSEQ([(42, 0, 3)], history) history = [] uncache_upto(42, 5 * page_size, (0, 3), iter([(5, 2)])) WVPASSEQ([(42, 0, 3)], history) history = [] uncache_upto(42, 6 * page_size, (0, 3), iter([(5, 2)])) WVPASSEQ([(42, 0, 3)], history) history = [] uncache_upto(42, 7 * page_size, (0, 3), iter([(5, 2)])) WVPASSEQ([(42, 0, 3), (42, 5, 2)], history) finally: hashsplit._fadvise_pages_done = orig_pages_done @wvtest def test_rolling_sums(): with no_lingering_errors(): WVPASS(_helpers.selftest()) @wvtest def test_fanout_behaviour(): # Drop in replacement for bupsplit, but splitting if the int value of a # byte >= BUP_BLOBBITS basebits = _helpers.blobbits() def splitbuf(buf): ofs = 0 for c in buf: ofs += 1 if ord(c) >= basebits: return ofs, ord(c) return 0, 0 with no_lingering_errors(): old_splitbuf = _helpers.splitbuf _helpers.splitbuf = splitbuf old_BLOB_MAX = hashsplit.BLOB_MAX hashsplit.BLOB_MAX = 4 old_BLOB_READ_SIZE = hashsplit.BLOB_READ_SIZE hashsplit.BLOB_READ_SIZE = 10 old_fanout = hashsplit.fanout hashsplit.fanout = 2 levels = lambda f: [(len(b), l) for b, l in hashsplit.hashsplit_iter([f], True, None)] # Return a string of n null bytes z = lambda n: '\x00' * n # Return a byte which will be split with a level of n sb = lambda n: chr(basebits + n) split_never = BytesIO(z(16)) split_first = BytesIO(z(1) + sb(3) + z(14)) split_end = BytesIO(z(13) + sb(1) + z(2)) split_many = BytesIO(sb(1) + z(3) + sb(2) + z(4) + sb(0) + z(4) + sb(5) + z(1)) WVPASSEQ(levels(split_never), [(4, 0), (4, 0), (4, 0), (4, 0)]) WVPASSEQ(levels(split_first), [(2, 3), (4, 0), (4, 0), (4, 0), (2, 0)]) WVPASSEQ(levels(split_end), [(4, 0), (4, 0), (4, 0), (2, 1), (2, 0)]) WVPASSEQ(levels(split_many), [(1, 1), (4, 2), (4, 0), (1, 0), (4, 0), (1, 5), (1, 0)]) _helpers.splitbuf = old_splitbuf hashsplit.BLOB_MAX = old_BLOB_MAX hashsplit.BLOB_READ_SIZE = old_BLOB_READ_SIZE hashsplit.fanout = old_fanout bup-0.29/lib/bup/t/thelpers.py000066400000000000000000000213741303127641400162410ustar00rootroot00000000000000 import helpers, math, os, os.path, stat, subprocess from wvtest import * from bup.helpers import (atomically_replaced_file, batchpipe, detect_fakeroot, grafted_path_components, mkdirp, parse_num, path_components, readpipe, stripped_path_components, utc_offset_str) from buptest import no_lingering_errors, test_tempdir import bup._helpers as _helpers bup_tmp = os.path.realpath('../../../t/tmp') mkdirp(bup_tmp) @wvtest def test_next(): with no_lingering_errors(): # Test whatever you end up with for next() after import '*'. 
WVPASSEQ(next(iter([]), None), None) x = iter([1]) WVPASSEQ(next(x, None), 1) WVPASSEQ(next(x, None), None) x = iter([1]) WVPASSEQ(next(x, 'x'), 1) WVPASSEQ(next(x, 'x'), 'x') WVEXCEPT(StopIteration, next, iter([])) x = iter([1]) WVPASSEQ(next(x), 1) WVEXCEPT(StopIteration, next, x) @wvtest def test_fallback_next(): with no_lingering_errors(): global next orig = next next = helpers._fallback_next try: test_next() finally: next = orig @wvtest def test_parse_num(): with no_lingering_errors(): pn = parse_num WVPASSEQ(pn('1'), 1) WVPASSEQ(pn('0'), 0) WVPASSEQ(pn('1.5k'), 1536) WVPASSEQ(pn('2 gb'), 2*1024*1024*1024) WVPASSEQ(pn('1e+9 k'), 1000000000 * 1024) WVPASSEQ(pn('-3e-3mb'), int(-0.003 * 1024 * 1024)) @wvtest def test_detect_fakeroot(): with no_lingering_errors(): if os.getenv('FAKEROOTKEY'): WVPASS(detect_fakeroot()) else: WVPASS(not detect_fakeroot()) @wvtest def test_path_components(): with no_lingering_errors(): WVPASSEQ(path_components('/'), [('', '/')]) WVPASSEQ(path_components('/foo'), [('', '/'), ('foo', '/foo')]) WVPASSEQ(path_components('/foo/'), [('', '/'), ('foo', '/foo')]) WVPASSEQ(path_components('/foo/bar'), [('', '/'), ('foo', '/foo'), ('bar', '/foo/bar')]) WVEXCEPT(Exception, path_components, 'foo') @wvtest def test_stripped_path_components(): with no_lingering_errors(): WVPASSEQ(stripped_path_components('/', []), [('', '/')]) WVPASSEQ(stripped_path_components('/', ['']), [('', '/')]) WVPASSEQ(stripped_path_components('/', ['/']), [('', '/')]) WVPASSEQ(stripped_path_components('/foo', ['/']), [('', '/'), ('foo', '/foo')]) WVPASSEQ(stripped_path_components('/', ['/foo']), [('', '/')]) WVPASSEQ(stripped_path_components('/foo', ['/bar']), [('', '/'), ('foo', '/foo')]) WVPASSEQ(stripped_path_components('/foo', ['/foo']), [('', '/foo')]) WVPASSEQ(stripped_path_components('/foo/bar', ['/foo']), [('', '/foo'), ('bar', '/foo/bar')]) WVPASSEQ(stripped_path_components('/foo/bar', ['/bar', '/foo', '/baz']), [('', '/foo'), ('bar', '/foo/bar')]) WVPASSEQ(stripped_path_components('/foo/bar/baz', ['/foo/bar/baz']), [('', '/foo/bar/baz')]) WVEXCEPT(Exception, stripped_path_components, 'foo', []) @wvtest def test_grafted_path_components(): with no_lingering_errors(): WVPASSEQ(grafted_path_components([('/chroot', '/')], '/foo'), [('', '/'), ('foo', '/foo')]) WVPASSEQ(grafted_path_components([('/foo/bar', '/')], '/foo/bar/baz/bax'), [('', '/foo/bar'), ('baz', '/foo/bar/baz'), ('bax', '/foo/bar/baz/bax')]) WVPASSEQ(grafted_path_components([('/foo/bar/baz', '/bax')], '/foo/bar/baz/1/2'), [('', None), ('bax', '/foo/bar/baz'), ('1', '/foo/bar/baz/1'), ('2', '/foo/bar/baz/1/2')]) WVPASSEQ(grafted_path_components([('/foo', '/bar/baz/bax')], '/foo/bar'), [('', None), ('bar', None), ('baz', None), ('bax', '/foo'), ('bar', '/foo/bar')]) WVPASSEQ(grafted_path_components([('/foo/bar/baz', '/a/b/c')], '/foo/bar/baz'), [('', None), ('a', None), ('b', None), ('c', '/foo/bar/baz')]) WVPASSEQ(grafted_path_components([('/', '/a/b/c/')], '/foo/bar'), [('', None), ('a', None), ('b', None), ('c', '/'), ('foo', '/foo'), ('bar', '/foo/bar')]) WVEXCEPT(Exception, grafted_path_components, 'foo', []) @wvtest def test_readpipe(): with no_lingering_errors(): x = readpipe(['echo', '42']) WVPASSEQ(x, '42\n') try: readpipe(['bash', '-c', 'exit 42']) except Exception as ex: WVPASSEQ(str(ex), "subprocess 'bash -c exit 42' failed with status 42") @wvtest def test_batchpipe(): with no_lingering_errors(): for chunk in batchpipe(['echo'], []): WVPASS(False) out = '' for chunk in batchpipe(['echo'], ['42']): out += chunk 
WVPASSEQ(out, '42\n') try: batchpipe(['bash', '-c'], ['exit 42']) except Exception as ex: WVPASSEQ(str(ex), "subprocess 'bash -c exit 42' failed with status 42") args = [str(x) for x in range(6)] # Force batchpipe to break the args into batches of 3. This # approach assumes all args are the same length. arg_max = \ helpers._argmax_base(['echo']) + helpers._argmax_args_size(args[:3]) batches = batchpipe(['echo'], args, arg_max=arg_max) WVPASSEQ(next(batches), '0 1 2\n') WVPASSEQ(next(batches), '3 4 5\n') WVPASSEQ(next(batches, None), None) batches = batchpipe(['echo'], [str(x) for x in range(5)], arg_max=arg_max) WVPASSEQ(next(batches), '0 1 2\n') WVPASSEQ(next(batches), '3 4\n') WVPASSEQ(next(batches, None), None) @wvtest def test_atomically_replaced_file(): with no_lingering_errors(): with test_tempdir('bup-thelper-') as tmpdir: target_file = os.path.join(tmpdir, 'test-atomic-write') with atomically_replaced_file(target_file, mode='w') as f: f.write('asdf') WVPASSEQ(f.mode, 'w') f = open(target_file, 'r') WVPASSEQ(f.read(), 'asdf') try: with atomically_replaced_file(target_file, mode='w') as f: f.write('wxyz') raise Exception() except: pass with open(target_file) as f: WVPASSEQ(f.read(), 'asdf') with atomically_replaced_file(target_file, mode='wb') as f: f.write(os.urandom(20)) WVPASSEQ(f.mode, 'wb') @wvtest def test_utc_offset_str(): with no_lingering_errors(): tz = os.environ.get('TZ') try: os.environ['TZ'] = 'FOO+0:00' WVPASSEQ(utc_offset_str(0), '+0000') os.environ['TZ'] = 'FOO+1:00' WVPASSEQ(utc_offset_str(0), '-0100') os.environ['TZ'] = 'FOO-1:00' WVPASSEQ(utc_offset_str(0), '+0100') os.environ['TZ'] = 'FOO+3:3' WVPASSEQ(utc_offset_str(0), '-0303') os.environ['TZ'] = 'FOO-3:3' WVPASSEQ(utc_offset_str(0), '+0303') # Offset is not an integer number of minutes os.environ['TZ'] = 'FOO+3:3:3' WVPASSEQ(utc_offset_str(1), '-0303') os.environ['TZ'] = 'FOO-3:3:3' WVPASSEQ(utc_offset_str(1), '+0303') WVPASSEQ(utc_offset_str(314159), '+0303') finally: if tz: os.environ['TZ'] = tz else: try: del os.environ['TZ'] except KeyError: pass @wvtest def test_valid_save_name(): with no_lingering_errors(): valid = helpers.valid_save_name WVPASS(valid('x')) WVPASS(valid('x@')) WVFAIL(valid('@')) WVFAIL(valid('/')) WVFAIL(valid('/foo')) WVFAIL(valid('foo/')) WVFAIL(valid('/foo/')) WVFAIL(valid('foo//bar')) WVFAIL(valid('.')) WVFAIL(valid('bar.')) WVFAIL(valid('foo@{')) for x in ' ~^:?*[\\': WVFAIL(valid('foo' + x)) for i in range(20): WVFAIL(valid('foo' + chr(i))) WVFAIL(valid('foo' + chr(0x7f))) WVFAIL(valid('foo..bar')) WVFAIL(valid('bar.lock/baz')) WVFAIL(valid('foo/bar.lock/baz')) WVFAIL(valid('.bar/baz')) WVFAIL(valid('foo/.bar/baz')) bup-0.29/lib/bup/t/tindex.py000066400000000000000000000146011303127641400157010ustar00rootroot00000000000000 import os, time from wvtest import * from bup import index, metadata from bup.helpers import mkdirp, resolve_parent from buptest import no_lingering_errors, test_tempdir import bup.xstat as xstat lib_t_dir = os.path.dirname(__file__) @wvtest def index_basic(): with no_lingering_errors(): cd = os.path.realpath('../../../t') WVPASS(cd) sd = os.path.realpath(cd + '/sampledata') WVPASSEQ(resolve_parent(cd + '/sampledata'), sd) WVPASSEQ(os.path.realpath(cd + '/sampledata/x'), sd + '/x') WVPASSEQ(os.path.realpath(cd + '/sampledata/var/abs-symlink'), sd + '/var/abs-symlink-target') WVPASSEQ(resolve_parent(cd + '/sampledata/var/abs-symlink'), sd + '/var/abs-symlink') @wvtest def index_writer(): with no_lingering_errors(): with test_tempdir('bup-tindex-') as tmpdir: 
orig_cwd = os.getcwd() try: os.chdir(tmpdir) ds = xstat.stat('.') fs = xstat.stat(lib_t_dir + '/tindex.py') ms = index.MetaStoreWriter('index.meta.tmp'); tmax = (time.time() - 1) * 10**9 w = index.Writer('index.tmp', ms, tmax) w.add('/var/tmp/sporky', fs, 0) w.add('/etc/passwd', fs, 0) w.add('/etc/', ds, 0) w.add('/', ds, 0) ms.close() w.close() finally: os.chdir(orig_cwd) def dump(m): for e in list(m): print '%s%s %s' % (e.is_valid() and ' ' or 'M', e.is_fake() and 'F' or ' ', e.name) def fake_validate(*l): for i in l: for e in i: e.validate(0100644, index.FAKE_SHA) e.repack() def eget(l, ename): for e in l: if e.name == ename: return e @wvtest def index_negative_timestamps(): with no_lingering_errors(): with test_tempdir('bup-tindex-') as tmpdir: # Makes 'foo' exist foopath = tmpdir + '/foo' f = file(foopath, 'wb') f.close() # Dec 31, 1969 os.utime(foopath, (-86400, -86400)) ns_per_sec = 10**9 tmax = (time.time() - 1) * ns_per_sec e = index.BlankNewEntry(foopath, 0, tmax) e.update_from_stat(xstat.stat(foopath), 0) WVPASS(e.packed()) # Jun 10, 1893 os.utime(foopath, (-0x80000000, -0x80000000)) e = index.BlankNewEntry(foopath, 0, tmax) e.update_from_stat(xstat.stat(foopath), 0) WVPASS(e.packed()) @wvtest def index_dirty(): with no_lingering_errors(): with test_tempdir('bup-tindex-') as tmpdir: orig_cwd = os.getcwd() try: os.chdir(tmpdir) default_meta = metadata.Metadata() ms1 = index.MetaStoreWriter('index.meta.tmp') ms2 = index.MetaStoreWriter('index2.meta.tmp') ms3 = index.MetaStoreWriter('index3.meta.tmp') meta_ofs1 = ms1.store(default_meta) meta_ofs2 = ms2.store(default_meta) meta_ofs3 = ms3.store(default_meta) ds = xstat.stat(lib_t_dir) fs = xstat.stat(lib_t_dir + '/tindex.py') tmax = (time.time() - 1) * 10**9 w1 = index.Writer('index.tmp', ms1, tmax) w1.add('/a/b/x', fs, meta_ofs1) w1.add('/a/b/c', fs, meta_ofs1) w1.add('/a/b/', ds, meta_ofs1) w1.add('/a/', ds, meta_ofs1) #w1.close() WVPASS() w2 = index.Writer('index2.tmp', ms2, tmax) w2.add('/a/b/n/2', fs, meta_ofs2) #w2.close() WVPASS() w3 = index.Writer('index3.tmp', ms3, tmax) w3.add('/a/c/n/3', fs, meta_ofs3) #w3.close() WVPASS() r1 = w1.new_reader() r2 = w2.new_reader() r3 = w3.new_reader() WVPASS() r1all = [e.name for e in r1] WVPASSEQ(r1all, ['/a/b/x', '/a/b/c', '/a/b/', '/a/', '/']) r2all = [e.name for e in r2] WVPASSEQ(r2all, ['/a/b/n/2', '/a/b/n/', '/a/b/', '/a/', '/']) r3all = [e.name for e in r3] WVPASSEQ(r3all, ['/a/c/n/3', '/a/c/n/', '/a/c/', '/a/', '/']) all = [e.name for e in index.merge(r2, r1, r3)] WVPASSEQ(all, ['/a/c/n/3', '/a/c/n/', '/a/c/', '/a/b/x', '/a/b/n/2', '/a/b/n/', '/a/b/c', '/a/b/', '/a/', '/']) fake_validate(r1) dump(r1) print [hex(e.flags) for e in r1] WVPASSEQ([e.name for e in r1 if e.is_valid()], r1all) WVPASSEQ([e.name for e in r1 if not e.is_valid()], []) WVPASSEQ([e.name for e in index.merge(r2, r1, r3) if not e.is_valid()], ['/a/c/n/3', '/a/c/n/', '/a/c/', '/a/b/n/2', '/a/b/n/', '/a/b/', '/a/', '/']) expect_invalid = ['/'] + r2all + r3all expect_real = (set(r1all) - set(r2all) - set(r3all)) \ | set(['/a/b/n/2', '/a/c/n/3']) dump(index.merge(r2, r1, r3)) for e in index.merge(r2, r1, r3): print e.name, hex(e.flags), e.ctime eiv = e.name in expect_invalid er = e.name in expect_real WVPASSEQ(eiv, not e.is_valid()) WVPASSEQ(er, e.is_real()) fake_validate(r2, r3) dump(index.merge(r2, r1, r3)) WVPASSEQ([e.name for e in index.merge(r2, r1, r3) if not e.is_valid()], []) e = eget(index.merge(r2, r1, r3), '/a/b/c') e.invalidate() e.repack() dump(index.merge(r2, r1, r3)) WVPASSEQ([e.name for e in 
index.merge(r2, r1, r3) if not e.is_valid()], ['/a/b/c', '/a/b/', '/a/', '/']) w1.close() w2.close() w3.close() finally: os.chdir(orig_cwd) bup-0.29/lib/bup/t/tmetadata.py000066400000000000000000000252321303127641400163540ustar00rootroot00000000000000 import errno, glob, grp, pwd, stat, tempfile, subprocess from wvtest import * from bup import git, metadata, vfs from bup.helpers import clear_errors, detect_fakeroot, is_superuser, resolve_parent from bup.xstat import utime, lutime from buptest import no_lingering_errors, test_tempdir import bup.helpers as helpers top_dir = '../../..' bup_tmp = os.path.realpath('../../../t/tmp') bup_path = top_dir + '/bup' start_dir = os.getcwd() def ex(*cmd): try: cmd_str = ' '.join(cmd) print >> sys.stderr, cmd_str rc = subprocess.call(cmd) if rc < 0: print >> sys.stderr, 'terminated by signal', - rc sys.exit(1) elif rc > 0: print >> sys.stderr, 'returned exit status', rc sys.exit(1) except OSError as e: print >> sys.stderr, 'subprocess call failed:', e sys.exit(1) def setup_testfs(): assert(sys.platform.startswith('linux')) # Set up testfs with user_xattr, etc. if subprocess.call(['modprobe', 'loop']) != 0: return False subprocess.call(['umount', 'testfs']) ex('dd', 'if=/dev/zero', 'of=testfs.img', 'bs=1M', 'count=32') ex('mke2fs', '-F', '-j', '-m', '0', 'testfs.img') ex('rm', '-rf', 'testfs') os.mkdir('testfs') ex('mount', '-o', 'loop,acl,user_xattr', 'testfs.img', 'testfs') # Hide, so that tests can't create risks. os.chown('testfs', 0, 0) os.chmod('testfs', 0o700) return True def cleanup_testfs(): subprocess.call(['umount', 'testfs']) helpers.unlink('testfs.img') @wvtest def test_clean_up_archive_path(): with no_lingering_errors(): cleanup = metadata._clean_up_path_for_archive WVPASSEQ(cleanup('foo'), 'foo') WVPASSEQ(cleanup('/foo'), 'foo') WVPASSEQ(cleanup('///foo'), 'foo') WVPASSEQ(cleanup('/foo/bar'), 'foo/bar') WVPASSEQ(cleanup('foo/./bar'), 'foo/bar') WVPASSEQ(cleanup('/foo/./bar'), 'foo/bar') WVPASSEQ(cleanup('/foo/./bar/././baz'), 'foo/bar/baz') WVPASSEQ(cleanup('/foo/./bar///././baz'), 'foo/bar/baz') WVPASSEQ(cleanup('//./foo/./bar///././baz/.///'), 'foo/bar/baz/') WVPASSEQ(cleanup('./foo/./.bar'), 'foo/.bar') WVPASSEQ(cleanup('./foo/.'), 'foo') WVPASSEQ(cleanup('./foo/..'), '.') WVPASSEQ(cleanup('//./..//.../..//.'), '.') WVPASSEQ(cleanup('//./..//..././/.'), '...') WVPASSEQ(cleanup('/////.'), '.') WVPASSEQ(cleanup('/../'), '.') WVPASSEQ(cleanup(''), '.') @wvtest def test_risky_path(): with no_lingering_errors(): risky = metadata._risky_path WVPASS(risky('/foo')) WVPASS(risky('///foo')) WVPASS(risky('/../foo')) WVPASS(risky('../foo')) WVPASS(risky('foo/..')) WVPASS(risky('foo/../')) WVPASS(risky('foo/../bar')) WVFAIL(risky('foo')) WVFAIL(risky('foo/')) WVFAIL(risky('foo///')) WVFAIL(risky('./foo')) WVFAIL(risky('foo/.')) WVFAIL(risky('./foo/.')) WVFAIL(risky('foo/bar')) WVFAIL(risky('foo/./bar')) @wvtest def test_clean_up_extract_path(): with no_lingering_errors(): cleanup = metadata._clean_up_extract_path WVPASSEQ(cleanup('/foo'), 'foo') WVPASSEQ(cleanup('///foo'), 'foo') WVFAIL(cleanup('/../foo')) WVFAIL(cleanup('../foo')) WVFAIL(cleanup('foo/..')) WVFAIL(cleanup('foo/../')) WVFAIL(cleanup('foo/../bar')) WVPASSEQ(cleanup('foo'), 'foo') WVPASSEQ(cleanup('foo/'), 'foo/') WVPASSEQ(cleanup('foo///'), 'foo///') WVPASSEQ(cleanup('./foo'), './foo') WVPASSEQ(cleanup('foo/.'), 'foo/.') WVPASSEQ(cleanup('./foo/.'), './foo/.') WVPASSEQ(cleanup('foo/bar'), 'foo/bar') WVPASSEQ(cleanup('foo/./bar'), 'foo/./bar') WVPASSEQ(cleanup('/'), '.') 
WVPASSEQ(cleanup('./'), './') WVPASSEQ(cleanup('///foo/bar'), 'foo/bar') WVPASSEQ(cleanup('///foo/bar'), 'foo/bar') @wvtest def test_metadata_method(): with no_lingering_errors(): with test_tempdir('bup-tmetadata-') as tmpdir: bup_dir = tmpdir + '/bup' data_path = tmpdir + '/foo' os.mkdir(data_path) ex('touch', data_path + '/file') ex('ln', '-s', 'file', data_path + '/symlink') test_time1 = 13 * 1000000000 test_time2 = 42 * 1000000000 utime(data_path + '/file', (0, test_time1)) lutime(data_path + '/symlink', (0, 0)) utime(data_path, (0, test_time2)) ex(bup_path, '-d', bup_dir, 'init') ex(bup_path, '-d', bup_dir, 'index', '-v', data_path) ex(bup_path, '-d', bup_dir, 'save', '-tvvn', 'test', data_path) git.check_repo_or_die(bup_dir) top = vfs.RefList(None) n = top.lresolve('/test/latest' + resolve_parent(data_path)) m = n.metadata() WVPASS(m.mtime == test_time2) WVPASS(len(n.subs()) == 2) WVPASS(n.name == 'foo') WVPASS(set([x.name for x in n.subs()]) == set(['file', 'symlink'])) for sub in n: if sub.name == 'file': m = sub.metadata() WVPASS(m.mtime == test_time1) elif sub.name == 'symlink': m = sub.metadata() WVPASS(m.mtime == 0) def _first_err(): if helpers.saved_errors: return str(helpers.saved_errors[0]) return '' @wvtest def test_from_path_error(): if is_superuser() or detect_fakeroot(): return with no_lingering_errors(): with test_tempdir('bup-tmetadata-') as tmpdir: path = tmpdir + '/foo' os.mkdir(path) m = metadata.from_path(path, archive_path=path, save_symlinks=True) WVPASSEQ(m.path, path) os.chmod(path, 000) metadata.from_path(path, archive_path=path, save_symlinks=True) if metadata.get_linux_file_attr: print >> sys.stderr, 'saved_errors:', helpers.saved_errors WVPASS(len(helpers.saved_errors) == 1) errmsg = _first_err() WVPASS(errmsg.startswith('read Linux attr')) clear_errors() def _linux_attr_supported(path): # Expects path to denote a regular file or a directory. if not metadata.get_linux_file_attr: return False try: metadata.get_linux_file_attr(path) except OSError as e: if e.errno in (errno.ENOTTY, errno.ENOSYS, errno.EOPNOTSUPP): return False else: raise return True @wvtest def test_apply_to_path_restricted_access(): if is_superuser() or detect_fakeroot(): return if sys.platform.startswith('cygwin'): return # chmod 000 isn't effective. with no_lingering_errors(): with test_tempdir('bup-tmetadata-') as tmpdir: parent = tmpdir + '/foo' path = parent + '/bar' os.mkdir(parent) os.mkdir(path) clear_errors() m = metadata.from_path(path, archive_path=path, save_symlinks=True) WVPASSEQ(m.path, path) os.chmod(parent, 000) m.apply_to_path(path) print >> sys.stderr, 'saved_errors:', helpers.saved_errors expected_errors = ['utime: '] if m.linux_attr and _linux_attr_supported(tmpdir): expected_errors.append('Linux chattr: ') if metadata.xattr and m.linux_xattr: expected_errors.append("xattr.set '") WVPASS(len(helpers.saved_errors) == len(expected_errors)) for i in xrange(len(expected_errors)): WVPASS(str(helpers.saved_errors[i]).startswith(expected_errors[i])) clear_errors() @wvtest def test_restore_over_existing_target(): with no_lingering_errors(): with test_tempdir('bup-tmetadata-') as tmpdir: path = tmpdir + '/foo' os.mkdir(path) dir_m = metadata.from_path(path, archive_path=path, save_symlinks=True) os.rmdir(path) open(path, 'w').close() file_m = metadata.from_path(path, archive_path=path, save_symlinks=True) # Restore dir over file. WVPASSEQ(dir_m.create_path(path, create_symlinks=True), None) WVPASS(stat.S_ISDIR(os.stat(path).st_mode)) # Restore dir over dir. 
WVPASSEQ(dir_m.create_path(path, create_symlinks=True), None) WVPASS(stat.S_ISDIR(os.stat(path).st_mode)) # Restore file over dir. WVPASSEQ(file_m.create_path(path, create_symlinks=True), None) WVPASS(stat.S_ISREG(os.stat(path).st_mode)) # Restore file over file. WVPASSEQ(file_m.create_path(path, create_symlinks=True), None) WVPASS(stat.S_ISREG(os.stat(path).st_mode)) # Restore file over non-empty dir. os.remove(path) os.mkdir(path) open(path + '/bar', 'w').close() WVEXCEPT(Exception, file_m.create_path, path, create_symlinks=True) # Restore dir over non-empty dir. os.remove(path + '/bar') os.mkdir(path + '/bar') WVEXCEPT(Exception, dir_m.create_path, path, create_symlinks=True) from bup.metadata import posix1e if not posix1e: @wvtest def POSIX1E_ACL_SUPPORT_IS_MISSING(): pass from bup.metadata import xattr if xattr: @wvtest def test_handling_of_incorrect_existing_linux_xattrs(): if not is_superuser() or detect_fakeroot(): WVMSG('skipping test -- not superuser') return if not setup_testfs(): WVMSG('unable to load loop module; skipping dependent tests') return for f in glob.glob('testfs/*'): ex('rm', '-rf', f) path = 'testfs/foo' open(path, 'w').close() xattr.set(path, 'foo', 'bar', namespace=xattr.NS_USER) m = metadata.from_path(path, archive_path=path, save_symlinks=True) xattr.set(path, 'baz', 'bax', namespace=xattr.NS_USER) m.apply_to_path(path, restore_numeric_ids=False) WVPASSEQ(xattr.list(path), ['user.foo']) WVPASSEQ(xattr.get(path, 'user.foo'), 'bar') xattr.set(path, 'foo', 'baz', namespace=xattr.NS_USER) m.apply_to_path(path, restore_numeric_ids=False) WVPASSEQ(xattr.list(path), ['user.foo']) WVPASSEQ(xattr.get(path, 'user.foo'), 'bar') xattr.remove(path, 'foo', namespace=xattr.NS_USER) m.apply_to_path(path, restore_numeric_ids=False) WVPASSEQ(xattr.list(path), ['user.foo']) WVPASSEQ(xattr.get(path, 'user.foo'), 'bar') os.chdir(start_dir) cleanup_testfs() bup-0.29/lib/bup/t/toptions.py000066400000000000000000000066001303127641400162650ustar00rootroot00000000000000 from bup import options from wvtest import * from buptest import no_lingering_errors @wvtest def test_optdict(): with no_lingering_errors(): d = options.OptDict({ 'x': ('x', False), 'y': ('y', False), 'z': ('z', False), 'other_thing': ('other_thing', False), 'no_other_thing': ('other_thing', True), 'no_z': ('z', True), 'no_smart': ('smart', True), 'smart': ('smart', False), 'stupid': ('smart', True), 'no_smart': ('smart', False), }) WVPASS('foo') d['x'] = 5 d['y'] = 4 d['z'] = 99 d['no_other_thing'] = 5 WVPASSEQ(d.x, 5) WVPASSEQ(d.y, 4) WVPASSEQ(d.z, 99) WVPASSEQ(d.no_z, False) WVPASSEQ(d.no_other_thing, True) WVEXCEPT(KeyError, lambda: d.p) invalid_optspec0 = """ """ invalid_optspec1 = """ prog """ invalid_optspec2 = """ -- x,y """ @wvtest def test_invalid_optspec(): with no_lingering_errors(): WVPASS(options.Options(invalid_optspec0).parse([])) WVPASS(options.Options(invalid_optspec1).parse([])) WVPASS(options.Options(invalid_optspec2).parse([])) optspec = """ prog [stuff...] 
prog [-t] -- t test q,quiet quiet l,longoption= long option with parameters and a really really long description that will require wrapping p= short option with parameters onlylong long option with no short neveropt never called options deftest1= a default option with default [1] deftest2= a default option with [1] default [2] deftest3= a default option with [3] no actual default deftest4= a default option with [[square]] deftest5= a default option with "correct" [[square] s,smart,no-stupid disable stupidity x,extended,no-simple extended mode [2] #,compress= set compression level [5] """ @wvtest def test_options(): with no_lingering_errors(): o = options.Options(optspec) (opt,flags,extra) = o.parse(['-tttqp', 7, '--longoption', '19', 'hanky', '--onlylong', '-7']) WVPASSEQ(flags[0], ('-t', '')) WVPASSEQ(flags[1], ('-t', '')) WVPASSEQ(flags[2], ('-t', '')) WVPASSEQ(flags[3], ('-q', '')) WVPASSEQ(flags[4], ('-p', 7)) WVPASSEQ(flags[5], ('--longoption', '19')) WVPASSEQ(extra, ['hanky']) WVPASSEQ((opt.t, opt.q, opt.p, opt.l, opt.onlylong, opt.neveropt), (3,1,7,19,1,None)) WVPASSEQ((opt.deftest1, opt.deftest2, opt.deftest3, opt.deftest4, opt.deftest5), (1,2,None,None,'[square')) WVPASSEQ((opt.stupid, opt.no_stupid), (True, None)) WVPASSEQ((opt.smart, opt.no_smart), (None, True)) WVPASSEQ((opt.x, opt.extended, opt.no_simple), (2,2,2)) WVPASSEQ((opt.no_x, opt.no_extended, opt.simple), (False,False,False)) WVPASSEQ(opt['#'], 7) WVPASSEQ(opt.compress, 7) (opt,flags,extra) = o.parse(['--onlylong', '-t', '--no-onlylong', '--smart', '--simple']) WVPASSEQ((opt.t, opt.q, opt.onlylong), (1, None, 0)) WVPASSEQ((opt.stupid, opt.no_stupid), (False, True)) WVPASSEQ((opt.smart, opt.no_smart), (True, False)) WVPASSEQ((opt.x, opt.extended, opt.no_simple), (0,0,0)) WVPASSEQ((opt.no_x, opt.no_extended, opt.simple), (True,True,True)) bup-0.29/lib/bup/t/tshquote.py000066400000000000000000000040461303127641400162640ustar00rootroot00000000000000 from wvtest import * from bup import shquote from buptest import no_lingering_errors def qst(line): return [word for offset,word in shquote.quotesplit(line)] @wvtest def test_shquote(): with no_lingering_errors(): WVPASSEQ(qst(""" this is basic \t\n\r text """), ['this', 'is', 'basic', 'text']) WVPASSEQ(qst(r""" \"x\" "help" 'yelp' """), ['"x"', 'help', 'yelp']) WVPASSEQ(qst(r""" "'\"\"'" '\"\'' """), ["'\"\"'", '\\"\'']) WVPASSEQ(shquote.quotesplit(' this is "unfinished'), [(2,'this'), (7,'is'), (10,'unfinished')]) WVPASSEQ(shquote.quotesplit('"silly"\'will'), [(0,'silly'), (7,'will')]) WVPASSEQ(shquote.unfinished_word('this is a "billy" "goat'), ('"', 'goat')) WVPASSEQ(shquote.unfinished_word("'x"), ("'", 'x')) WVPASSEQ(shquote.unfinished_word("abra cadabra "), (None, '')) WVPASSEQ(shquote.unfinished_word("abra cadabra"), (None, 'cadabra')) (qtype, word) = shquote.unfinished_word("this is /usr/loc") WVPASSEQ(shquote.what_to_add(qtype, word, "/usr/local", True), "al") (qtype, word) = shquote.unfinished_word("this is '/usr/loc") WVPASSEQ(shquote.what_to_add(qtype, word, "/usr/local", True), "al'") (qtype, word) = shquote.unfinished_word("this is \"/usr/loc") WVPASSEQ(shquote.what_to_add(qtype, word, "/usr/local", True), "al\"") (qtype, word) = shquote.unfinished_word("this is \"/usr/loc") WVPASSEQ(shquote.what_to_add(qtype, word, "/usr/local", False), "al") (qtype, word) = shquote.unfinished_word("this is \\ hammer\\ \"") WVPASSEQ(word, ' hammer "') WVPASSEQ(shquote.what_to_add(qtype, word, " hammer \"time\"", True), "time\\\"") WVPASSEQ(shquote.quotify_list(['a', '', 
'"word"', "'third'", "'", "x y"]), "a '' '\"word\"' \"'third'\" \"'\" 'x y'") bup-0.29/lib/bup/t/tvint.py000066400000000000000000000051151303127641400155520ustar00rootroot00000000000000from io import BytesIO from wvtest import * from bup import vint from buptest import no_lingering_errors def encode_and_decode_vuint(x): f = BytesIO() vint.write_vuint(f, x) return vint.read_vuint(BytesIO(f.getvalue())) @wvtest def test_vuint(): with no_lingering_errors(): for x in (0, 1, 42, 128, 10**16): WVPASSEQ(encode_and_decode_vuint(x), x) WVEXCEPT(Exception, vint.write_vuint, BytesIO(), -1) WVEXCEPT(EOFError, vint.read_vuint, BytesIO()) def encode_and_decode_vint(x): f = BytesIO() vint.write_vint(f, x) return vint.read_vint(BytesIO(f.getvalue())) @wvtest def test_vint(): with no_lingering_errors(): values = (0, 1, 42, 64, 10**16) for x in values: WVPASSEQ(encode_and_decode_vint(x), x) for x in [-x for x in values]: WVPASSEQ(encode_and_decode_vint(x), x) WVEXCEPT(EOFError, vint.read_vint, BytesIO()) def encode_and_decode_bvec(x): f = BytesIO() vint.write_bvec(f, x) return vint.read_bvec(BytesIO(f.getvalue())) @wvtest def test_bvec(): with no_lingering_errors(): values = ('', 'x', 'foo', '\0', '\0foo', 'foo\0bar\0') for x in values: WVPASSEQ(encode_and_decode_bvec(x), x) WVEXCEPT(EOFError, vint.read_bvec, BytesIO()) outf = BytesIO() for x in ('foo', 'bar', 'baz', 'bax'): vint.write_bvec(outf, x) inf = BytesIO(outf.getvalue()) WVPASSEQ(vint.read_bvec(inf), 'foo') WVPASSEQ(vint.read_bvec(inf), 'bar') vint.skip_bvec(inf) WVPASSEQ(vint.read_bvec(inf), 'bax') def pack_and_unpack(types, *values): data = vint.pack(types, *values) return vint.unpack(types, data) @wvtest def test_pack_and_unpack(): with no_lingering_errors(): tests = [('', []), ('s', ['foo']), ('ss', ['foo', 'bar']), ('sV', ['foo', 0]), ('sv', ['foo', -1]), ('V', [0]), ('Vs', [0, 'foo']), ('VV', [0, 1]), ('Vv', [0, -1]), ('v', [0]), ('vs', [0, 'foo']), ('vV', [0, 1]), ('vv', [0, -1])] for test in tests: (types, values) = test WVPASSEQ(pack_and_unpack(types, *values), values) WVEXCEPT(Exception, vint.pack, 's') WVEXCEPT(Exception, vint.pack, 's', 'foo', 'bar') WVEXCEPT(Exception, vint.pack, 'x', 1) WVEXCEPT(Exception, vint.unpack, 's', '') WVEXCEPT(Exception, vint.unpack, 'x', '') bup-0.29/lib/bup/t/txstat.py000066400000000000000000000110571303127641400157370ustar00rootroot00000000000000import math, tempfile, subprocess from wvtest import * import bup._helpers as _helpers from bup import xstat from buptest import no_lingering_errors, test_tempdir @wvtest def test_fstime(): with no_lingering_errors(): WVPASSEQ(xstat.timespec_to_nsecs((0, 0)), 0) WVPASSEQ(xstat.timespec_to_nsecs((1, 0)), 10**9) WVPASSEQ(xstat.timespec_to_nsecs((0, 10**9 / 2)), 500000000) WVPASSEQ(xstat.timespec_to_nsecs((1, 10**9 / 2)), 1500000000) WVPASSEQ(xstat.timespec_to_nsecs((-1, 0)), -10**9) WVPASSEQ(xstat.timespec_to_nsecs((-1, 10**9 / 2)), -500000000) WVPASSEQ(xstat.timespec_to_nsecs((-2, 10**9 / 2)), -1500000000) WVPASSEQ(xstat.timespec_to_nsecs((0, -1)), -1) WVPASSEQ(type(xstat.timespec_to_nsecs((2, 22222222))), type(0)) WVPASSEQ(type(xstat.timespec_to_nsecs((-2, 22222222))), type(0)) WVPASSEQ(xstat.nsecs_to_timespec(0), (0, 0)) WVPASSEQ(xstat.nsecs_to_timespec(10**9), (1, 0)) WVPASSEQ(xstat.nsecs_to_timespec(500000000), (0, 10**9 / 2)) WVPASSEQ(xstat.nsecs_to_timespec(1500000000), (1, 10**9 / 2)) WVPASSEQ(xstat.nsecs_to_timespec(-10**9), (-1, 0)) WVPASSEQ(xstat.nsecs_to_timespec(-500000000), (-1, 10**9 / 2)) WVPASSEQ(xstat.nsecs_to_timespec(-1500000000), (-2, 10**9 / 
2)) x = xstat.nsecs_to_timespec(1977777778) WVPASSEQ(type(x[0]), type(0)) WVPASSEQ(type(x[1]), type(0)) x = xstat.nsecs_to_timespec(-1977777778) WVPASSEQ(type(x[0]), type(0)) WVPASSEQ(type(x[1]), type(0)) WVPASSEQ(xstat.nsecs_to_timeval(0), (0, 0)) WVPASSEQ(xstat.nsecs_to_timeval(10**9), (1, 0)) WVPASSEQ(xstat.nsecs_to_timeval(500000000), (0, (10**9 / 2) / 1000)) WVPASSEQ(xstat.nsecs_to_timeval(1500000000), (1, (10**9 / 2) / 1000)) WVPASSEQ(xstat.nsecs_to_timeval(-10**9), (-1, 0)) WVPASSEQ(xstat.nsecs_to_timeval(-500000000), (-1, (10**9 / 2) / 1000)) WVPASSEQ(xstat.nsecs_to_timeval(-1500000000), (-2, (10**9 / 2) / 1000)) x = xstat.nsecs_to_timeval(1977777778) WVPASSEQ(type(x[0]), type(0)) WVPASSEQ(type(x[1]), type(0)) x = xstat.nsecs_to_timeval(-1977777778) WVPASSEQ(type(x[0]), type(0)) WVPASSEQ(type(x[1]), type(0)) WVPASSEQ(xstat.fstime_floor_secs(0), 0) WVPASSEQ(xstat.fstime_floor_secs(10**9 / 2), 0) WVPASSEQ(xstat.fstime_floor_secs(10**9), 1) WVPASSEQ(xstat.fstime_floor_secs(-10**9 / 2), -1) WVPASSEQ(xstat.fstime_floor_secs(-10**9), -1) WVPASSEQ(type(xstat.fstime_floor_secs(10**9 / 2)), type(0)) WVPASSEQ(type(xstat.fstime_floor_secs(-10**9 / 2)), type(0)) @wvtest def test_bup_utimensat(): if not xstat._bup_utimensat: return with no_lingering_errors(): with test_tempdir('bup-txstat-') as tmpdir: path = tmpdir + '/foo' open(path, 'w').close() frac_ts = (0, 10**9 / 2) xstat._bup_utimensat(_helpers.AT_FDCWD, path, (frac_ts, frac_ts), 0) st = _helpers.stat(path) atime_ts = st[8] mtime_ts = st[9] WVPASSEQ(atime_ts[0], 0) WVPASS(atime_ts[1] == 0 or atime_ts[1] == frac_ts[1]) WVPASSEQ(mtime_ts[0], 0) WVPASS(mtime_ts[1] == 0 or mtime_ts[1] == frac_ts[1]) @wvtest def test_bup_utimes(): if not xstat._bup_utimes: return with no_lingering_errors(): with test_tempdir('bup-txstat-') as tmpdir: path = tmpdir + '/foo' open(path, 'w').close() frac_ts = (0, 10**6 / 2) xstat._bup_utimes(path, (frac_ts, frac_ts)) st = _helpers.stat(path) atime_ts = st[8] mtime_ts = st[9] WVPASSEQ(atime_ts[0], 0) WVPASS(atime_ts[1] == 0 or atime_ts[1] == frac_ts[1] * 1000) WVPASSEQ(mtime_ts[0], 0) WVPASS(mtime_ts[1] == 0 or mtime_ts[1] == frac_ts[1] * 1000) @wvtest def test_bup_lutimes(): if not xstat._bup_lutimes: return with no_lingering_errors(): with test_tempdir('bup-txstat-') as tmpdir: path = tmpdir + '/foo' open(path, 'w').close() frac_ts = (0, 10**6 / 2) xstat._bup_lutimes(path, (frac_ts, frac_ts)) st = _helpers.stat(path) atime_ts = st[8] mtime_ts = st[9] WVPASSEQ(atime_ts[0], 0) WVPASS(atime_ts[1] == 0 or atime_ts[1] == frac_ts[1] * 1000) WVPASSEQ(mtime_ts[0], 0) WVPASS(mtime_ts[1] == 0 or mtime_ts[1] == frac_ts[1] * 1000) bup-0.29/lib/bup/version.py000066400000000000000000000002511303127641400156240ustar00rootroot00000000000000 from bup import _release if _release.COMMIT != '$Format:%H$': from bup._release import COMMIT, DATE, NAMES else: from bup._checkout import COMMIT, DATE, NAMES bup-0.29/lib/bup/vfs.py000066400000000000000000000474441303127641400147540ustar00rootroot00000000000000"""Virtual File System representing bup's repository contents. The vfs.py library makes it possible to expose contents from bup's repository and abstracts internal name mangling and storage from the exposition layer. 
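A rough, illustrative sketch of typical use, mirroring lib/bup/t/tmetadata.py
('/path/to/repo' and 'mybranch' are placeholders for a real repository path
and a branch created by 'bup save -n mybranch ...'):

    git.check_repo_or_die('/path/to/repo')
    top = RefList(None)   # virtual root: one entry per branch, plus .tag and .commit
    node = top.resolve('/mybranch/latest/etc/passwd')
    meta = node.metadata()
    data = node.open().read()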
""" import os, re, stat, time from bup import git, metadata from helpers import debug1, debug2 from bup.git import BUP_NORMAL, BUP_CHUNKED, cp from bup.hashsplit import GIT_MODE_TREE, GIT_MODE_FILE EMPTY_SHA='\0'*20 class NodeError(Exception): """VFS base exception.""" pass class NoSuchFile(NodeError): """Request of a file that does not exist.""" pass class NotDir(NodeError): """Attempt to do a directory action on a file that is not one.""" pass class NotFile(NodeError): """Access to a node that does not represent a file.""" pass class TooManySymlinks(NodeError): """Symlink dereferencing level is too deep.""" pass def _treeget(hash, repo_dir=None): it = cp(repo_dir).get(hash.encode('hex')) type = it.next() assert(type == 'tree') return git.tree_decode(''.join(it)) def _tree_decode(hash, repo_dir=None): tree = [(int(name,16),stat.S_ISDIR(mode),sha) for (mode,name,sha) in _treeget(hash, repo_dir)] assert(tree == list(sorted(tree))) return tree def _chunk_len(hash, repo_dir=None): return sum(len(b) for b in cp(repo_dir).join(hash.encode('hex'))) def _last_chunk_info(hash, repo_dir=None): tree = _tree_decode(hash, repo_dir) assert(tree) (ofs,isdir,sha) = tree[-1] if isdir: (subofs, sublen) = _last_chunk_info(sha, repo_dir) return (ofs+subofs, sublen) else: return (ofs, _chunk_len(sha)) def _total_size(hash, repo_dir=None): (lastofs, lastsize) = _last_chunk_info(hash, repo_dir) return lastofs + lastsize def _chunkiter(hash, startofs, repo_dir=None): assert(startofs >= 0) tree = _tree_decode(hash, repo_dir) # skip elements before startofs for i in xrange(len(tree)): if i+1 >= len(tree) or tree[i+1][0] > startofs: break first = i # iterate through what's left for i in xrange(first, len(tree)): (ofs,isdir,sha) = tree[i] skipmore = startofs-ofs if skipmore < 0: skipmore = 0 if isdir: for b in _chunkiter(sha, skipmore, repo_dir): yield b else: yield ''.join(cp(repo_dir).join(sha.encode('hex')))[skipmore:] class _ChunkReader: def __init__(self, hash, isdir, startofs, repo_dir=None): if isdir: self.it = _chunkiter(hash, startofs, repo_dir) self.blob = None else: self.it = None self.blob = ''.join(cp(repo_dir).join(hash.encode('hex')))[startofs:] self.ofs = startofs def next(self, size): out = '' while len(out) < size: if self.it and not self.blob: try: self.blob = self.it.next() except StopIteration: self.it = None if self.blob: want = size - len(out) out += self.blob[:want] self.blob = self.blob[want:] if not self.it: break debug2('next(%d) returned %d\n' % (size, len(out))) self.ofs += len(out) return out class _FileReader(object): def __init__(self, hash, size, isdir, repo_dir=None): self.hash = hash self.ofs = 0 self.size = size self.isdir = isdir self.reader = None self._repo_dir = repo_dir def seek(self, ofs): if ofs > self.size: self.ofs = self.size elif ofs < 0: self.ofs = 0 else: self.ofs = ofs def tell(self): return self.ofs def read(self, count = -1): if count < 0: count = self.size - self.ofs if not self.reader or self.reader.ofs != self.ofs: self.reader = _ChunkReader(self.hash, self.isdir, self.ofs, self._repo_dir) try: buf = self.reader.next(count) except: self.reader = None raise # our offsets will be all screwed up otherwise self.ofs += len(buf) return buf def close(self): pass class Node(object): """Base class for file representation.""" def __init__(self, parent, name, mode, hash, repo_dir=None): self.parent = parent self.name = name self.mode = mode self.hash = hash self.ctime = self.mtime = self.atime = 0 self._repo_dir = repo_dir self._subs = None self._metadata = None def 
__repr__(self): return "<%s object at %s - name:%r hash:%s parent:%r>" \ % (self.__class__, hex(id(self)), self.name, self.hash.encode('hex'), self.parent.name if self.parent else None) def __cmp__(a, b): if a is b: return 0 return (cmp(a and a.parent, b and b.parent) or cmp(a and a.name, b and b.name)) def __iter__(self): return iter(self.subs()) def fullname(self, stop_at=None): """Get this file's full path.""" assert(self != stop_at) # would be the empty string; too weird if self.parent and self.parent != stop_at: return os.path.join(self.parent.fullname(stop_at=stop_at), self.name) else: return self.name def _mksubs(self): self._subs = {} def subs(self): """Get a list of nodes that are contained in this node.""" if self._subs == None: self._mksubs() return sorted(self._subs.values()) def sub(self, name): """Get node named 'name' that is contained in this node.""" if self._subs == None: self._mksubs() ret = self._subs.get(name) if not ret: raise NoSuchFile("no file %r in %r" % (name, self.name)) return ret def top(self): """Return the very top node of the tree.""" if self.parent: return self.parent.top() else: return self def fs_top(self): """Return the top node of the particular backup set. If this node isn't inside a backup set, return the root level. """ if self.parent and not isinstance(self.parent, CommitList): return self.parent.fs_top() else: return self def _lresolve(self, parts): #debug2('_lresolve %r in %r\n' % (parts, self.name)) if not parts: return self (first, rest) = (parts[0], parts[1:]) if first == '.': return self._lresolve(rest) elif first == '..': if not self.parent: raise NoSuchFile("no parent dir for %r" % self.name) return self.parent._lresolve(rest) elif rest: return self.sub(first)._lresolve(rest) else: return self.sub(first) def lresolve(self, path, stay_inside_fs=False): """Walk into a given sub-path of this node. If the last element is a symlink, leave it as a symlink, don't resolve it. (like lstat()) """ start = self if not path: return start if path.startswith('/'): if stay_inside_fs: start = self.fs_top() else: start = self.top() path = path[1:] parts = re.split(r'/+', path or '.') if not parts[-1]: parts[-1] = '.' #debug2('parts: %r %r\n' % (path, parts)) return start._lresolve(parts) def resolve(self, path = ''): """Like lresolve(), and dereference it if it was a symlink.""" return self.lresolve(path).lresolve('.') def try_resolve(self, path = ''): """Like resolve(), but don't worry if a symlink uses an invalid path. Returns an error if any intermediate nodes were invalid. """ n = self.lresolve(path) try: n = n.lresolve('.') except NoSuchFile: pass return n def nlinks(self): """Get the number of hard links to the current node.""" return 1 def size(self): """Get the size of the current node.""" return 0 def open(self): """Open the current node. It is an error to open a non-file node.""" raise NotFile('%s is not a regular file' % self.name) def _populate_metadata(self, force=False): # Only Dirs contain .bupm files, so by default, do nothing. 
pass def metadata(self): """Return this Node's Metadata() object, if any.""" if not self._metadata and self.parent: self.parent._populate_metadata(force=True) return self._metadata def release(self): """Release resources that can be automatically restored (at a cost).""" self._metadata = None self._subs = None class File(Node): """A normal file from bup's repository.""" def __init__(self, parent, name, mode, hash, bupmode, repo_dir=None): Node.__init__(self, parent, name, mode, hash, repo_dir) self.bupmode = bupmode self._cached_size = None self._filereader = None def open(self): """Open the file.""" # You'd think FUSE might call this only once each time a file is # opened, but no; it's really more of a refcount, and it's called # once per read(). Thus, it's important to cache the filereader # object here so we're not constantly re-seeking. if not self._filereader: self._filereader = _FileReader(self.hash, self.size(), self.bupmode == git.BUP_CHUNKED, repo_dir = self._repo_dir) self._filereader.seek(0) return self._filereader def size(self): """Get this file's size.""" if self._cached_size == None: debug1('<<< 100: raise TooManySymlinks('too many levels of symlinks: %r' % self.fullname()) _symrefs += 1 try: try: return self.parent.lresolve(self.readlink(), stay_inside_fs=True) except NoSuchFile: raise NoSuchFile("%s: broken symlink to %r" % (self.fullname(), self.readlink())) finally: _symrefs -= 1 def _lresolve(self, parts): return self.dereference()._lresolve(parts) class FakeSymlink(Symlink): """A symlink that is not stored in the bup repository.""" def __init__(self, parent, name, toname, repo_dir=None): Symlink.__init__(self, parent, name, EMPTY_SHA, git.BUP_NORMAL, repo_dir = repo_dir) self.toname = toname def readlink(self): """Get the path that this link points at.""" return self.toname class Dir(Node): """A directory stored inside of bup's repository.""" def __init__(self, *args, **kwargs): Node.__init__(self, *args, **kwargs) self._bupm = None def _populate_metadata(self, force=False): if self._metadata and not force: return if not self._subs: self._mksubs() if not self._bupm: return meta_stream = self._bupm.open() dir_meta = metadata.Metadata.read(meta_stream) for sub in self: if not stat.S_ISDIR(sub.mode): sub._metadata = metadata.Metadata.read(meta_stream) self._metadata = dir_meta def _mksubs(self): self._subs = {} it = cp(self._repo_dir).get(self.hash.encode('hex')) type = it.next() if type == 'commit': del it it = cp(self._repo_dir).get(self.hash.encode('hex') + ':') type = it.next() assert(type == 'tree') for (mode,mangled_name,sha) in git.tree_decode(''.join(it)): if mangled_name == '.bupm': bupmode = stat.S_ISDIR(mode) and BUP_CHUNKED or BUP_NORMAL self._bupm = File(self, mangled_name, GIT_MODE_FILE, sha, bupmode) continue name, bupmode = git.demangle_name(mangled_name, mode) if bupmode == git.BUP_CHUNKED: mode = GIT_MODE_FILE if stat.S_ISDIR(mode): self._subs[name] = Dir(self, name, mode, sha, self._repo_dir) elif stat.S_ISLNK(mode): self._subs[name] = Symlink(self, name, sha, bupmode, self._repo_dir) else: self._subs[name] = File(self, name, mode, sha, bupmode, self._repo_dir) def metadata(self): """Return this Dir's Metadata() object, if any.""" self._populate_metadata() return self._metadata def metadata_file(self): """Return this Dir's .bupm File, if any.""" if not self._subs: self._mksubs() return self._bupm def release(self): """Release restorable resources held by this node.""" self._bupm = None super(Dir, self).release() class CommitDir(Node): """A directory 
that contains all commits that are reachable by a ref. Contains a set of subdirectories named after the commits' first byte in hexadecimal. Each of those directories contain all commits with hashes that start the same as the directory name. The name used for those subdirectories is the hash of the commit without the first byte. This separation helps us avoid having too much directories on the same level as the number of commits grows big. """ def __init__(self, parent, name, repo_dir=None): Node.__init__(self, parent, name, GIT_MODE_TREE, EMPTY_SHA, repo_dir) def _mksubs(self): self._subs = {} refs = git.list_refs(repo_dir = self._repo_dir) for ref in refs: #debug2('ref name: %s\n' % ref[0]) revs = git.rev_list(ref[1].encode('hex'), repo_dir = self._repo_dir) for (date, commit) in revs: #debug2('commit: %s date: %s\n' % (commit.encode('hex'), date)) commithex = commit.encode('hex') containername = commithex[:2] dirname = commithex[2:] n1 = self._subs.get(containername) if not n1: n1 = CommitList(self, containername, self._repo_dir) self._subs[containername] = n1 if n1.commits.get(dirname): # Stop work for this ref, the rest should already be present break n1.commits[dirname] = (commit, date) class CommitList(Node): """A list of commits with hashes that start with the current node's name.""" def __init__(self, parent, name, repo_dir=None): Node.__init__(self, parent, name, GIT_MODE_TREE, EMPTY_SHA, repo_dir) self.commits = {} def _mksubs(self): self._subs = {} for (name, (hash, date)) in self.commits.items(): n1 = Dir(self, name, GIT_MODE_TREE, hash, self._repo_dir) n1.ctime = n1.mtime = date self._subs[name] = n1 class TagDir(Node): """A directory that contains all tags in the repository.""" def __init__(self, parent, name, repo_dir = None): Node.__init__(self, parent, name, GIT_MODE_TREE, EMPTY_SHA, repo_dir) def _mksubs(self): self._subs = {} for (name, sha) in git.list_refs(repo_dir = self._repo_dir): if name.startswith('refs/tags/'): name = name[10:] date = git.get_commit_dates([sha.encode('hex')], repo_dir=self._repo_dir)[0] commithex = sha.encode('hex') target = '../.commit/%s/%s' % (commithex[:2], commithex[2:]) tag1 = FakeSymlink(self, name, target, self._repo_dir) tag1.ctime = tag1.mtime = date self._subs[name] = tag1 class BranchList(Node): """A list of links to commits reachable by a branch in bup's repository. Represents each commit as a symlink that points to the commit directory in /.commit/??/ . The symlink is named after the commit date. """ def __init__(self, parent, name, hash, repo_dir=None): Node.__init__(self, parent, name, GIT_MODE_TREE, hash, repo_dir) def _mksubs(self): self._subs = {} revs = list(git.rev_list(self.hash.encode('hex'), repo_dir=self._repo_dir)) latest = revs[0] for (date, commit) in revs: l = time.localtime(date) ls = time.strftime('%Y-%m-%d-%H%M%S', l) commithex = commit.encode('hex') target = '../.commit/%s/%s' % (commithex[:2], commithex[2:]) n1 = FakeSymlink(self, ls, target, self._repo_dir) n1.ctime = n1.mtime = date self._subs[ls] = n1 (date, commit) = latest commithex = commit.encode('hex') target = '../.commit/%s/%s' % (commithex[:2], commithex[2:]) n1 = FakeSymlink(self, 'latest', target, self._repo_dir) n1.ctime = n1.mtime = date self._subs['latest'] = n1 class RefList(Node): """A list of branches in bup's repository. The sub-nodes of the ref list are a series of CommitList for each commit hash pointed to by a branch. Also, a special sub-node named '.commit' contains all commit directories that are reachable via a ref (e.g. a branch). 
See CommitDir for details. """ def __init__(self, parent, repo_dir=None): Node.__init__(self, parent, '/', GIT_MODE_TREE, EMPTY_SHA, repo_dir) def _mksubs(self): self._subs = {} commit_dir = CommitDir(self, '.commit', self._repo_dir) self._subs['.commit'] = commit_dir tag_dir = TagDir(self, '.tag', self._repo_dir) self._subs['.tag'] = tag_dir refs_info = [(name[11:], sha) for (name,sha) in git.list_refs(repo_dir=self._repo_dir) if name.startswith('refs/heads/')] dates = git.get_commit_dates([sha.encode('hex') for (name, sha) in refs_info], repo_dir=self._repo_dir) for (name, sha), date in zip(refs_info, dates): n1 = BranchList(self, name, sha, self._repo_dir) n1.ctime = n1.mtime = date self._subs[name] = n1 bup-0.29/lib/bup/vint.py000066400000000000000000000065271303127641400151330ustar00rootroot00000000000000"""Binary encodings for bup.""" # Copyright (C) 2010 Rob Browning # # This code is covered under the terms of the GNU Library General # Public License as described in the bup LICENSE file. from io import BytesIO # Variable length integers are encoded as vints -- see jakarta lucene. def write_vuint(port, x): if x < 0: raise Exception("vuints must not be negative") elif x == 0: port.write('\0') else: while x: seven_bits = x & 0x7f x >>= 7 if x: port.write(chr(0x80 | seven_bits)) else: port.write(chr(seven_bits)) def read_vuint(port): c = port.read(1) if c == '': raise EOFError('encountered EOF while reading vuint'); result = 0 offset = 0 while c: b = ord(c) if b & 0x80: result |= ((b & 0x7f) << offset) offset += 7 c = port.read(1) else: result |= (b << offset) break return result def write_vint(port, x): # Sign is handled with the second bit of the first byte. All else # matches vuint. if x == 0: port.write('\0') else: if x < 0: x = -x sign_and_six_bits = (x & 0x3f) | 0x40 else: sign_and_six_bits = x & 0x3f x >>= 6 if x: port.write(chr(0x80 | sign_and_six_bits)) write_vuint(port, x) else: port.write(chr(sign_and_six_bits)) def read_vint(port): c = port.read(1) if c == '': raise EOFError('encountered EOF while reading vint'); negative = False result = 0 offset = 0 # Handle first byte with sign bit specially. 
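# As a worked example of this framing: write_vuint(port, 300) emits the bytes
# 0xac 0x02 (seven low bits 0x2c with the 0x80 continuation bit set, then the
# remaining bits, 2), write_vint(port, 300) emits 0xac 0x04 (six low bits plus
# the continuation bit, then 4 as a vuint), and write_vint(port, -5) emits the
# single byte 0x45 (0x40 sign bit | 5). read_vint below reverses this,
# treating bit 0x40 of the first byte as the sign.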
if c: b = ord(c) if b & 0x40: negative = True result |= (b & 0x3f) if b & 0x80: offset += 6 c = port.read(1) elif negative: return -result else: return result while c: b = ord(c) if b & 0x80: result |= ((b & 0x7f) << offset) offset += 7 c = port.read(1) else: result |= (b << offset) break if negative: return -result else: return result def write_bvec(port, x): write_vuint(port, len(x)) port.write(x) def read_bvec(port): n = read_vuint(port) return port.read(n) def skip_bvec(port): port.read(read_vuint(port)) def pack(types, *args): if len(types) != len(args): raise Exception('number of arguments does not match format string') port = BytesIO() for (type, value) in zip(types, args): if type == 'V': write_vuint(port, value) elif type == 'v': write_vint(port, value) elif type == 's': write_bvec(port, value) else: raise Exception('unknown xpack format string item "' + type + '"') return port.getvalue() def unpack(types, data): result = [] port = BytesIO(data) for type in types: if type == 'V': result.append(read_vuint(port)) elif type == 'v': result.append(read_vint(port)) elif type == 's': result.append(read_bvec(port)) else: raise Exception('unknown xunpack format string item "' + type + '"') return result bup-0.29/lib/bup/xstat.py000066400000000000000000000115371303127641400153130ustar00rootroot00000000000000"""Enhanced stat operations for bup.""" import os, sys import stat as pystat from bup import _helpers try: _bup_utimensat = _helpers.bup_utimensat except AttributeError as e: _bup_utimensat = False try: _bup_utimes = _helpers.bup_utimes except AttributeError as e: _bup_utimes = False try: _bup_lutimes = _helpers.bup_lutimes except AttributeError as e: _bup_lutimes = False def timespec_to_nsecs((ts_s, ts_ns)): return ts_s * 10**9 + ts_ns def nsecs_to_timespec(ns): """Return (s, ns) where ns is always non-negative and t = s + ns / 10e8""" # metadata record rep ns = int(ns) return (ns / 10**9, ns % 10**9) def nsecs_to_timeval(ns): """Return (s, us) where ns is always non-negative and t = s + us / 10e5""" ns = int(ns) return (ns / 10**9, (ns % 10**9) / 1000) def fstime_floor_secs(ns): """Return largest integer not greater than ns / 10e8.""" return int(ns) / 10**9; def fstime_to_timespec(ns): return nsecs_to_timespec(ns) def fstime_to_sec_str(fstime): (s, ns) = fstime_to_timespec(fstime) if(s < 0): s += 1 if ns == 0: return '%d' % s else: return '%d.%09d' % (s, ns) if _bup_utimensat: def utime(path, times): """Times must be provided as (atime_ns, mtime_ns).""" atime = nsecs_to_timespec(times[0]) mtime = nsecs_to_timespec(times[1]) _bup_utimensat(_helpers.AT_FDCWD, path, (atime, mtime), 0) def lutime(path, times): """Times must be provided as (atime_ns, mtime_ns).""" atime = nsecs_to_timespec(times[0]) mtime = nsecs_to_timespec(times[1]) _bup_utimensat(_helpers.AT_FDCWD, path, (atime, mtime), _helpers.AT_SYMLINK_NOFOLLOW) else: # Must have these if utimensat isn't available. 
def utime(path, times): """Times must be provided as (atime_ns, mtime_ns).""" atime = nsecs_to_timeval(times[0]) mtime = nsecs_to_timeval(times[1]) _bup_utimes(path, (atime, mtime)) def lutime(path, times): """Times must be provided as (atime_ns, mtime_ns).""" atime = nsecs_to_timeval(times[0]) mtime = nsecs_to_timeval(times[1]) _bup_lutimes(path, (atime, mtime)) _cygwin_sys = sys.platform.startswith('cygwin') def _fix_cygwin_id(id): if id < 0: id += 0x100000000 assert(id >= 0) return id class stat_result: @staticmethod def from_xstat_rep(st): global _cygwin_sys result = stat_result() (result.st_mode, result.st_ino, result.st_dev, result.st_nlink, result.st_uid, result.st_gid, result.st_rdev, result.st_size, result.st_atime, result.st_mtime, result.st_ctime) = st # Inlined timespec_to_nsecs after profiling result.st_atime = result.st_atime[0] * 10**9 + result.st_atime[1] result.st_mtime = result.st_mtime[0] * 10**9 + result.st_mtime[1] result.st_ctime = result.st_ctime[0] * 10**9 + result.st_ctime[1] if _cygwin_sys: result.st_uid = _fix_cygwin_id(result.st_uid) result.st_gid = _fix_cygwin_id(result.st_gid) return result def stat(path): return stat_result.from_xstat_rep(_helpers.stat(path)) def fstat(path): return stat_result.from_xstat_rep(_helpers.fstat(path)) def lstat(path): return stat_result.from_xstat_rep(_helpers.lstat(path)) def mode_str(mode): result = '' # FIXME: Other types? if pystat.S_ISREG(mode): result += '-' elif pystat.S_ISDIR(mode): result += 'd' elif pystat.S_ISCHR(mode): result += 'c' elif pystat.S_ISBLK(mode): result += 'b' elif pystat.S_ISFIFO(mode): result += 'p' elif pystat.S_ISLNK(mode): result += 'l' elif pystat.S_ISSOCK(mode): result += 's' else: result += '?' result += 'r' if (mode & pystat.S_IRUSR) else '-' result += 'w' if (mode & pystat.S_IWUSR) else '-' result += 'x' if (mode & pystat.S_IXUSR) else '-' result += 'r' if (mode & pystat.S_IRGRP) else '-' result += 'w' if (mode & pystat.S_IWGRP) else '-' result += 'x' if (mode & pystat.S_IXGRP) else '-' result += 'r' if (mode & pystat.S_IROTH) else '-' result += 'w' if (mode & pystat.S_IWOTH) else '-' result += 'x' if (mode & pystat.S_IXOTH) else '-' return result def classification_str(mode, include_exec): if pystat.S_ISREG(mode): if include_exec \ and (pystat.S_IMODE(mode) \ & (pystat.S_IXUSR | pystat.S_IXGRP | pystat.S_IXOTH)): return '*' else: return '' elif pystat.S_ISDIR(mode): return '/' elif pystat.S_ISLNK(mode): return '@' elif pystat.S_ISFIFO(mode): return '|' elif pystat.S_ISSOCK(mode): return '=' else: return '' bup-0.29/lib/web/000077500000000000000000000000001303127641400135565ustar00rootroot00000000000000bup-0.29/lib/web/list-directory.html000066400000000000000000000031001303127641400174130ustar00rootroot00000000000000{% comment This template expects the default xhtml autoescaping. %} Directory listing for {{ path }}
{% if files_hidden %}
{% if hidden_shown %} Hide hidden files {% else %} Show hidden files {% end %}
{% end %}
Name Size
{% for (display, link, size) in dir_contents %}
{{ display }} {% if size != None %}{{ size }}{% else %} {% end %}
{% end %}
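The template above is filled in by `bup web` with a `path` string, the
`files_hidden`/`hidden_shown` flags, and `dir_contents` as (display, link,
size) tuples, where a size of None leaves the size cell blank. A minimal,
hypothetical sketch of rendering it standalone with Tornado's template module
(not how `bup web` itself wires it up, and the real template may expect extra
context such as the breadcrumb trail styled in styles.css):

    from tornado import template

    # Path is relative to the bup source tree.
    with open('lib/web/list-directory.html') as f:
        t = template.Template(f.read())

    page = t.generate(path='/mybranch/latest/',
                      files_hidden=True,
                      hidden_shown=False,
                      dir_contents=[('etc/', '/mybranch/latest/etc/', None),
                                    ('README', '/mybranch/latest/README', 1234)])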
bup-0.29/lib/web/static/000077500000000000000000000000001303127641400150455ustar00rootroot00000000000000bup-0.29/lib/web/static/styles.css000066400000000000000000000003311303127641400170770ustar00rootroot00000000000000body { font-family: sans-serif } #wrapper { width: 90%; margin: auto; } #breadcrumb { margin: 10px 0; } table { width: auto; } th { text-align: left; } .dir-size { padding-left:15px; }bup-0.29/main.py000077500000000000000000000151051303127641400135360ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- # -*-python-*- bup_python="$(dirname "$0")/cmd/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import sys, os, subprocess, signal, getopt argv = sys.argv exe = os.path.realpath(argv[0]) exepath = os.path.split(exe)[0] or '.' exeprefix = os.path.split(os.path.abspath(exepath))[0] # fix the PYTHONPATH to include our lib dir if os.path.exists("%s/lib/bup/cmd/." % exeprefix): # installed binary in /.../bin. # eg. /usr/bin/bup means /usr/lib/bup/... is where our libraries are. cmdpath = "%s/lib/bup/cmd" % exeprefix libpath = "%s/lib/bup" % exeprefix resourcepath = libpath else: # running from the src directory without being installed first cmdpath = os.path.join(exepath, 'cmd') libpath = os.path.join(exepath, 'lib') resourcepath = libpath sys.path[:0] = [libpath] os.environ['PYTHONPATH'] = libpath + ':' + os.environ.get('PYTHONPATH', '') os.environ['BUP_MAIN_EXE'] = os.path.abspath(exe) os.environ['BUP_RESOURCE_PATH'] = resourcepath from bup import helpers from bup.helpers import atoi, columnate, debug1, log, tty_width # after running 'bup newliner', the tty_width() ioctl won't work anymore os.environ['WIDTH'] = str(tty_width()) def usage(msg=""): log('Usage: bup [-?|--help] [-d BUP_DIR] [--debug] [--profile] ' ' [options...]\n\n') common = dict( ftp = 'Browse backup sets using an ftp-like client', fsck = 'Check backup sets for damage and add redundancy information', fuse = 'Mount your backup sets as a filesystem', help = 'Print detailed help for the given command', index = 'Create or display the index of files to back up', on = 'Backup a remote machine to the local one', restore = 'Extract files from a backup set', save = 'Save files into a backup set (note: run "bup index" first)', tag = 'Tag commits for easier access', web = 'Launch a web server to examine backup sets', ) log('Common commands:\n') for cmd,synopsis in sorted(common.items()): log(' %-10s %s\n' % (cmd, synopsis)) log('\n') log('Other available commands:\n') cmds = [] for c in sorted(os.listdir(cmdpath) + os.listdir(exepath)): if c.startswith('bup-') and c.find('.') < 0: cname = c[4:] if cname not in common: cmds.append(c[4:]) log(columnate(cmds, ' ')) log('\n') log("See 'bup help COMMAND' for more information on " + "a specific command.\n") if msg: log("\n%s\n" % msg) sys.exit(99) if len(argv) < 2: usage() # Handle global options. 
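# These are the options that may appear before the subcommand name, e.g.
#   bup -d /path/to/repo index -v /home
#   bup -d /path/to/repo save -n home /home
# (with '/path/to/repo' and 'home' as stand-ins); everything from the
# subcommand name onward is passed to the matching cmd/bup-* program.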
try: optspec = ['help', 'version', 'debug', 'profile', 'bup-dir='] global_args, subcmd = getopt.getopt(argv[1:], '?VDd:', optspec) except getopt.GetoptError as ex: usage('error: %s' % ex.msg) help_requested = None do_profile = False for opt in global_args: if opt[0] in ['-?', '--help']: help_requested = True elif opt[0] in ['-V', '--version']: subcmd = ['version'] elif opt[0] in ['-D', '--debug']: helpers.buglvl += 1 os.environ['BUP_DEBUG'] = str(helpers.buglvl) elif opt[0] in ['--profile']: do_profile = True elif opt[0] in ['-d', '--bup-dir']: os.environ['BUP_DIR'] = opt[1] else: usage('error: unexpected option "%s"' % opt[0]) # Make BUP_DIR absolute, so we aren't affected by chdir (i.e. save -C, etc.). if 'BUP_DIR' in os.environ: os.environ['BUP_DIR'] = os.path.abspath(os.environ['BUP_DIR']) if len(subcmd) == 0: if help_requested: subcmd = ['help'] else: usage() if help_requested and subcmd[0] != 'help': subcmd = ['help'] + subcmd if len(subcmd) > 1 and subcmd[1] == '--help' and subcmd[0] != 'help': subcmd = ['help', subcmd[0]] + subcmd[2:] subcmd_name = subcmd[0] if not subcmd_name: usage() def subpath(s): sp = os.path.join(exepath, 'bup-%s' % s) if not os.path.exists(sp): sp = os.path.join(cmdpath, 'bup-%s' % s) return sp subcmd[0] = subpath(subcmd_name) if not os.path.exists(subcmd[0]): usage('error: unknown command "%s"' % subcmd_name) already_fixed = atoi(os.environ.get('BUP_FORCE_TTY')) if subcmd_name in ['mux', 'ftp', 'help']: already_fixed = True fix_stdout = not already_fixed and os.isatty(1) fix_stderr = not already_fixed and os.isatty(2) def force_tty(): if fix_stdout or fix_stderr: amt = (fix_stdout and 1 or 0) + (fix_stderr and 2 or 0) os.environ['BUP_FORCE_TTY'] = str(amt) os.setsid() # make sure ctrl-c is sent just to us, not to child too if fix_stdout or fix_stderr: realf = fix_stderr and 2 or 1 drealf = os.dup(realf) # Popen goes crazy with stdout=2 n = subprocess.Popen([subpath('newliner')], stdin=subprocess.PIPE, stdout=drealf, close_fds=True, preexec_fn=force_tty) os.close(drealf) outf = fix_stdout and n.stdin.fileno() or None errf = fix_stderr and n.stdin.fileno() or None else: n = None outf = None errf = None ret = 95 p = None forward_signals = True def handler(signum, frame): debug1('\nbup: signal %d received\n' % signum) if not p or not forward_signals: return if signum != signal.SIGTSTP: os.kill(p.pid, signum) else: # SIGTSTP: stop the child, then ourselves. os.kill(p.pid, signal.SIGSTOP) signal.signal(signal.SIGTSTP, signal.SIG_DFL) os.kill(os.getpid(), signal.SIGTSTP) # Back from suspend -- reestablish the handler. signal.signal(signal.SIGTSTP, handler) ret = 94 signal.signal(signal.SIGTERM, handler) signal.signal(signal.SIGINT, handler) signal.signal(signal.SIGTSTP, handler) signal.signal(signal.SIGCONT, handler) try: try: c = (do_profile and [sys.executable, '-m', 'cProfile'] or []) + subcmd if not n and not outf and not errf: # shortcut when no bup-newliner stuff is needed os.execvp(c[0], c) else: p = subprocess.Popen(c, stdout=outf, stderr=errf, preexec_fn=force_tty) while 1: # if we get a signal while waiting, we have to keep waiting, just # in case our child doesn't die. 
ret = p.wait() forward_signals = False break except OSError as e: log('%s: %s\n' % (subcmd[0], e)) ret = 98 finally: if p and p.poll() == None: os.kill(p.pid, signal.SIGTERM) p.wait() if n: n.stdin.close() try: n.wait() except: pass sys.exit(ret) bup-0.29/note/000077500000000000000000000000001303127641400132005ustar00rootroot00000000000000bup-0.29/note/0.27.1-from-0.27.md000066400000000000000000000016331303127641400155770ustar00rootroot00000000000000 Notable changes in 0.27.1 as compared to 0.27 ============================================= May require attention --------------------- * In previous versions, a `--sparse` restore might have produced incorrect data. Please treat any existing `--sparse` restores as suspect. The problem should be fixed in this release, and the `--sparse` tests have been substantially augmented. Thanks to (at least) ==================== Frank Gevaerts (1): restore: test --sparse with zeros at 64k boundary Greg Troxel Marcus Schopen Rob Browning (7): Use $RANDOM seed for --sparse random tests restore: add generative --sparse testing restore: fix --sparse corruption Merge restore --sparse corruption fix Add note/0.27.1-from-0.27.md and mention in README restore: fix --sparse fix (find_non_sparse_end) test_server_split_with_indexes: close packwriter Robert S. Edmonds bup-0.29/note/0.28-from-0.27.1.md000066400000000000000000000107731303127641400156050ustar00rootroot00000000000000 Notable changes in 0.28 as compared to 0.27.1 ============================================= May require attention --------------------- * The default install PREFIX is now "/usr/local". * BINDIR, DOCDIR, LIBDIR, and MANDIR settings no longer side-step DESTDIR. i.e. `make DESTDIR=/x MANDIR=/y` install will install the manpages to "/x/y" not just "/y". * The index format has changed, which will trigger a full index rebuild on the next index run, making that run more expensive than usual. * When given `--xdev`, `bup save` should no longer skip directories that are explicitly listed on the command line when the directory is both on a separate filesystem, and a subtree of another path listed on the command line. Previously `bup save --xdev / /usr` could skip "/usr" if it was on a separate filesystem from "/". * Tags along a branch are no longer shown in the branch's directory in the virtual filesystem (VFS). i.e. given `bup tag special /foo/latest`, "/foo/special" will no longer be visible via `bup ls`, `bup web`, `bup fuse`, etc., but the tag will still be available as "/.tag/special". General ------- * bup now provides experimental `rm` and `gc` subcommands, which should allow branches and saves to be deleted, and their storage space reclaimed (assuming nothing else refers to the relevant data). For the moment, these commands require an `--unsafe` argument and should be treated accordingly. Although if an attempt to `join` or `restore` the data you still care about after a `gc` succeeds, that's a fairly encouraging sign that the commands worked correctly. (The `t/compare-trees` command in the source tree can be used to help test before/after results.) Note that the current `gc` command is probabilistic, which means it may not remove *all* of the obsolete data from the repository, but also means that the command should be fairly efficient, even for large repositories. * bup may have less impact on the filesystem cache. It now attempts to leave the cache roughly the way it found it when running a `save` or `split`. * A specific Python can be specified at `./configure` time via PYTHON, i.e. 
`PYTHON=/some/python ./configure`, and that Python will be embedded in all of the relevant scripts as an explicit "#!/..." line during `make install`. * `bup web` will now attempt an orderly shutdown when it receives a SIGTERM. * `bup web` will listen on a filesystem socket when given an address like "unix://...". * bup no longer limits the number of files in a directory to 100000. The limit is now UINT_MAX. * `bup fuse` now has a `--verbose` argument, and responds to `--debug`. Bugs ---- * bup save should not fail when asked to save a subdirectory of a directory that was completely up to date in the index. Previously this could cause a "shalists" assertion failure. * The way bup writes the data to disk (the packfiles in particular) should be a bit safer now if there is a coincident power failure or system crash. * A problem has been fixed that could cause bup to ignore the current TZ setting when computing the local time. * bup should no longer generate broken commits when the timezone offset is not an integer number of hours (e.g. TZ=Australia/Adelaide). * `bup midx --output` should now work when used with `--auto` or `--force`. * `bup import-rsnapshot` should exit with a status of 1, not -1. * bup should be more likely to get the data to permanent storage safely on OS X, which appears to follow a surprising interpretation of the `fsync()` specification. * `bup web` should handle non-ASCII paths better. It will no longer treat them as (and try to convert them to) Unicode (which they're not). * `bup restore` should no longer crash when an attempt to remove an xattr returns EACCES. Build system ------------ * The tests can now be run in parallel (and possibly much more quickly) via `make -j check`. * The build system now creates and uses cmd/bup-python which refers to the `./configure` selected python. Thanks to (at least) ==================== Aidan Hobson Sayers, Ben Kelly, Ben Wiederhake, Brandon Smith, Brian Minton, David Kettler, Frank Gevaerts, Gabriel Filion, Greg Troxel, James Lott, Karl-Philipp Richter, Luis Sanchez Sanchez, Marcus Schopen, Mark J Hewitt, Markus, Mathieu Schroeter, Michael March, Nimen Nachname, Nix, Patrick Rouleau, Paul Kronenwetter, Rob Browning, Robert Edmonds, Simon Persson, Tadej Janež, Thomas Klausner, Tilo Schwarz, Tim Riemenschneider, Wayne Scott, pspdevel, and stevelr bup-0.29/note/0.28.1-from-0.28.md000066400000000000000000000007571303127641400156070ustar00rootroot00000000000000 Notable changes in 0.28.1 as compared to 0.28 ============================================= General ------- * Builds from unpacked release archives (created via "git archive TAG") should work again. Build system ------------ * test-web.sh and test-meta.sh should now work on newer versions of OS X, and with Homebrew rsync. * cmd/bup-python's permissions should now respect the umask. Thanks to (at least) ==================== Gernot Schulz, Karl Semich, Rob Browning, and ilovezfs bup-0.29/note/0.29-from-0.28.1.md000066400000000000000000000044531303127641400156050ustar00rootroot00000000000000 Notable changes in 0.29 as compared to 0.28.1 ============================================= May require attention --------------------- * The minimum Python version is now 2.6. * The index format has been adjusted to handle a larger number of entries, which will trigger a full index rebuild on the next index update, making that run more expensive than usual.
* The `gc` command should now clean up its temporary bloom filters, but filters created by earlier invocations may still exist in your repositories in the objects/pack/ directory as tmp-gc-*.bloom files. It should be safe to delete these files when no bup commands are running. General ------- * Some Python 2.6 compatibility problems have been fixed. * `index` runs may be much less expensive for parts of the filesystem that haven't changed since the last save. * An experimental `prune-older` command has been added. It removes (permanently deletes) all saves except those preserved by a set of arguments like `--keep-monthlies-for 3y`. See `bup help prune-older` for further information. * `gc` should now only require up to one packfile (about 1GB) of temporary space while running. Previously it might require much more. * `gc` should read much less data now, which may make it notably faster. * The `gc` `--threshold` argument should actually be allowed now. * `gc` should be able to handle deeper filesystem trees without crashing. Previously it was constrained by the default Python stack depth limit. * `save` and `split` should reject invalid `-n` names immediately instead of waiting until after their work is complete. * bup should no longer crash when trying to fsync on an SMB filesystem under OS X. * `save` and `restore` should work on ntfs-3g filesystems now. Previously they might crash when trying to manipulate file attrs. Build system ------------ * The web tests should be skipped if tornado is not detected. * The fuse tests should be skipped if the fuse module is not detected. * `make clean` should work better on non-Linux systems. Thanks to (at least) ==================== Andrew Skretvedt, Ben Kelly, Bruno Bigras, Greg Troxel, Jacob Edelman, Jonathan Wright, Julien Sanchez, Mark J Hewitt, Nick Alcock, Pascal Honoré, Rob Browning, Wayne Scott, axion, ilovezfs, phillipproell, and vi0oss bup-0.29/t/000077500000000000000000000000001303127641400124765ustar00rootroot00000000000000bup-0.29/t/cleanup-mounts-under000077500000000000000000000025531303127641400165160ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ from sys import stderr import os.path, re, subprocess, sys def mntent_unescape(x): def replacement(m): unescapes = { "\\\\" : "\\", "\\011" : "\t", "\\012" : "\n", "\\040" : " " } return unescapes.get(m.group(0)) return re.sub(r'(\\\\|\\011|\\012|\\040)', replacement, x) targets = sys.argv[1:] if not os.path.exists('/proc/mounts'): print >> stderr, 'No /proc/mounts; skipping mount cleanup in', repr(targets) sys.exit(0) exit_status = 0 for target in targets: if not os.path.isdir(target): print >> stderr, repr(target), 'is not a directory' exit_status = 1 continue top = os.path.realpath(target) proc_mounts = open('/proc/mounts', 'r') for line in proc_mounts: _, point, fstype, _ = line.split(' ', 3) point = mntent_unescape(point) if top == point or os.path.commonprefix((top + '/', point)) == top + '/': if fstype.startswith('fuse'): if subprocess.call(['fusermount', '-uz', point]) != 0: exit_status = 1 else: if subprocess.call(['umount', '-l', point]) != 0: exit_status = 1 sys.exit(exit_status) bup-0.29/t/compare-trees000077500000000000000000000036721303127641400151770ustar00rootroot00000000000000#!/usr/bin/env bash set -u # Test that src and dest trees are as identical as bup is capable of # making them. For now, use rsync -niaHAX ...
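# (Flag summary for the rsync invocation used here: -n dry-run, -i itemize
# changes, -a archive mode, -H preserve hard links; the ACL (-A) and xattr
# (-X) checks are only added further down, when the locally installed rsync
# supports them.)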
usage() { cat <&2; exit 1;; esac done shift $(($OPTIND - 1)) || exit $? if ! test $# -eq 2 then usage 1>&2 exit 1 fi src="$1" dest="$2" tmpfile="$(mktemp /tmp/bup-test-XXXXXXX)" || exit $? trap "rm -rf '$tmpfile'" EXIT || exit $? rsync_opts="-niaH$verify_content --delete" rsync_version=$(rsync --version) if [[ ! "$rsync_version" =~ "ACLs" ]] || [[ "$rsync_version" =~ "no ACLs" ]]; then echo "Not comparing ACLs (not supported by available rsync)" 1>&2 else case $OSTYPE in cygwin|darwin|netbsd) echo "Not comparing ACLs (not yet supported on $OSTYPE)" 1>&2 ;; *) rsync_opts="$rsync_opts -A" ;; esac fi xattrs_available='' if [[ ! "$rsync_version" =~ "xattrs" ]] || [[ "$rsync_version" =~ "no xattrs" ]]; then echo "Not comparing xattrs (not supported by available rsync)" 1>&2 else xattrs_available=yes fi # Even in dry-run mode, rsync may fail if -X is specified and the # filesystems don't support xattrs. if test "$xattrs_available"; then rsync $rsync_opts -X "$src" "$dest" > "$tmpfile" if test $? -ne 0; then # Try again without -X rsync $rsync_opts "$src" "$dest" > "$tmpfile" || exit $? fi else rsync $rsync_opts "$src" "$dest" > "$tmpfile" || exit $? fi if test $(wc -l < "$tmpfile") != 0; then echo "Differences between $src and $dest" 1>&2 cat "$tmpfile" exit 1 fi exit 0 bup-0.29/t/configure-sampledata000077500000000000000000000031361303127641400165210ustar00rootroot00000000000000#!/usr/bin/env bash set -o pipefail # NOTE: any relevant changes to var/ must be accompanied by an # increment to the revision. revision=1 readonly revision top="$(pwd)" || exit $? usage() { echo 'Usage: t/configure-sampledata [--setup | --clean | --revision]' } if test "$#" -ne 1; then usage 1>&2; exit 1 fi rm_symlinks() { for p in "$@"; do # test -e is false for dangling symlinks. if test -h "$p" -o -e "$p"; then rm "$p" || exit $?; fi done } clean() ( cd t/sampledata || exit $? if test -e var; then rm -r var || exit $?; fi # Remove legacy content (before everything moved to var/). rm_symlinks abs-symlink b c etc ) case "$1" in --setup) ( clean mkdir -p t/sampledata/var/rev || exit $? cd t/sampledata/var || exit $? ln -sf a b || exit $? ln -sf b c || exit $? ln -sf "$(pwd)/abs-symlink-target" abs-symlink || exit $? mkdir -p cmd doc lib/bup || exit $? cp -pP "$top"/cmd/*.py cmd/ || exit $? cp -pP "$top"/Documentation/*.md doc/ || exit $? cp -pP "$top"/lib/bup/*.py lib/bup || exit $? # The "v" ensures that if "configure-sampledata # --revision" and/or the setup above fails somehow, # callers like make will be looking for a file that won't # exist. touch rev/v$revision || exit $? ) || exit $? ;; --clean) clean ;; --revision) echo "$revision" || exit $? ;; *) usage 1>&2; exit 1 ;; esac bup-0.29/t/data-size000077500000000000000000000011101303127641400142760ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble from os.path import getsize, isdir from sys import argv, stderr import os def listdir_failure(ex): raise ex def usage(): print >> stderr, "Usage: data-size PATH ..." total = 0 for path in argv[1:]: if isdir(path): for root, dirs, files in os.walk(path, onerror=listdir_failure): total += sum(getsize(os.path.join(root, name)) for name in files) else: total += getsize(path) print total bup-0.29/t/force-delete000077500000000000000000000011651303127641400147650ustar00rootroot00000000000000#!/usr/bin/env bash set -o pipefail # Try *hard* to delete $@. 
Among other things, some systems have # r-xr-xr-x for root and other system dirs. rc=0 rm -rf "$@" # Maybe we'll get lucky. for f in "$@"; do test -e "$f" || continue if test "$(type -p setfacl)"; then setfacl -Rb "$f" fi if test "$(type -p chattr)"; then chattr -R -aisu "$f" fi chmod -R u+rwX "$f" rm -r "$f" if test -e "$f"; then rc=1 find "$f" -ls lsattr -aR "$f" getfacl -R "$f" fi done if test "$rc" -ne 0; then echo "Failed to delete everything" 1>&2 fi exit "$rc" bup-0.29/t/git-cat-tree000077500000000000000000000017011303127641400147100ustar00rootroot00000000000000#!/usr/bin/env bash # Recursively dump all blobs in the subtree identified by ID. set -o pipefail usage() { cat <&2 exit 1 ;; esac } if test $# -ne 1 then usage 1>&2 exit 1 fi top="$1" type=$(git cat-file -t "$top") || exit $? cat-item "$top" "$type" bup-0.29/t/hardlink-sets000077500000000000000000000026021303127641400151740ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import os, stat, sys # Print the full paths of all the files in each hardlink set # underneath one of the paths. Separate sets with a blank line, sort # the paths within each set, and sort the sets by their first path. def usage(): print >> sys.stderr, "Usage: hardlink-sets " if len(sys.argv) < 2: usage() sys.exit(1) def on_walk_error(e): raise e hardlink_set = {} for p in sys.argv[1:]: for root, dirs, files in os.walk(p, onerror = on_walk_error): for filename in files: full_path = os.path.join(root, filename) st = os.lstat(full_path) if not stat.S_ISDIR(st.st_mode): node = '%s:%s' % (st.st_dev, st.st_ino) link_paths = hardlink_set.get(node) if link_paths: link_paths.append(full_path) else: hardlink_set[node] = [full_path] # Sort the link sets. for node, link_paths in hardlink_set.items(): link_paths.sort() first_set = True for link_paths in sorted(hardlink_set.values(), key = lambda x : x[0]): if len(link_paths) > 1: if first_set: first_set = False else: print for p in sorted(link_paths): print p sys.exit(0) bup-0.29/t/id-other-than000077500000000000000000000021211303127641400150630ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import grp import pwd import sys def usage(): print >> sys.stderr, "Usage: id-other-than <--user|--group> ID [ID ...]" if len(sys.argv) < 2: usage() sys.exit(1) def is_integer(x): try: int(x) return True except ValueError, e: return False excluded_ids = set(int(x) for x in sys.argv[2:] if is_integer(x)) excluded_names = (x for x in sys.argv[2:] if not is_integer(x)) if sys.argv[1] == '--user': for x in excluded_names: excluded_ids.add(pwd.getpwnam(x).pw_uid) for x in pwd.getpwall(): if x.pw_uid not in excluded_ids: print x.pw_name + ':' + str(x.pw_uid) sys.exit(0) elif sys.argv[1] == '--group': for x in excluded_names: excluded_ids.add(grp.getgrnam(x).gr_gid) for x in grp.getgrall(): if x.gr_gid not in excluded_ids: print x.gr_name + ':' + str(x.gr_gid) sys.exit(0) else: usage() sys.exit(1) bup-0.29/t/lib.sh000066400000000000000000000024011303127641400135750ustar00rootroot00000000000000# Assumes shell is Bash, and pipefail is set. bup_t_lib_script_home=$(cd "$(dirname $0)" && pwd) || exit $? bup-python() { "$bup_t_lib_script_home/../cmd/bup-python" "$@" } force-delete() { "$bup_t_lib_script_home/force-delete" "$@" } resolve-parent() { test "$#" -eq 1 || return $? 
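# Pipe the single path argument to bup.helpers.resolve_parent() via the
# configure-selected bup-python; the resolved path is printed on stdout.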
echo "$1" | \ PYTHONPATH="$bup_t_lib_script_home/../lib" bup-python -c \ "import sys, bup.helpers; print bup.helpers.resolve_parent(sys.stdin.readline())" \ || return $? } current-filesystem() { local kernel="$(uname -s)" || return $? case "$kernel" in NetBSD) df -G . | sed -En 's/.* ([^ ]*) fstype.*/\1/p' ;; SunOS) df -g . | sed -En 's/.* ([^ ]*) fstype.*/\1/p' ;; *) df -T . | awk 'END{print $2}' esac } path-filesystems() ( # Return filesystem for each dir from $1 to /. # Perhaps for /foo/bar, "ext4\next4\nbtrfs\n". test "$#" -eq 1 || exit $? cd "$1" || exit $? current-filesystem || exit $? dir="$(pwd)" || exit $? while test "$dir" != /; do cd .. || exit $? dir="$(pwd)" || exit $? current-filesystem || exit $? done exit 0 ) escape-erx() { sed 's/[][\.|$(){?+*^]/\\&/g' <<< "$*" } bup-0.29/t/mksock000077500000000000000000000003671303127641400137210ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import socket, sys s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM, 0) s.bind(sys.argv[1]) bup-0.29/t/ns-timestamp-resolutions000077500000000000000000000023151303127641400174320ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import os, sys argv = sys.argv exe = os.path.realpath(argv[0]) exepath = os.path.split(exe)[0] or '.' exeprefix = os.path.split(os.path.abspath(exepath))[0] # fix the PYTHONPATH to include our lib dir libpath = os.path.join(exepath, '..', 'lib') sys.path[:0] = [libpath] os.environ['PYTHONPATH'] = libpath + ':' + os.environ.get('PYTHONPATH', '') import bup.xstat as xstat from bup.helpers import handle_ctrl_c, saved_errors from bup import metadata, options optspec = """ ns-timestamp-resolutions TEST_FILE_NAME -- """ handle_ctrl_c() o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if len(extra) != 1: o.fatal('must specify a test file name') target = extra[0] open(target, 'w').close() xstat.utime(target, (123456789, 123456789)) meta = metadata.from_path(target) def ns_resolution(x): n = 1; while n < 10**9 and x % 10 == 0: x /= 10 n *= 10 return n print ns_resolution(meta.atime), ns_resolution(meta.mtime) if saved_errors: log('warning: %d errors encountered\n' % len(saved_errors)) sys.exit(1) bup-0.29/t/root-status000077500000000000000000000013411303127641400147270ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble from sys import stderr import sys if sys.platform.startswith('cygwin'): if sys.getwindowsversion()[0] > 5: # Sounds like the situation is much more complicated here print >> stderr, "can't detect root status for OS version > 5; assuming not root" print 'none' import ctypes if ctypes.cdll.shell32.IsUserAnAdmin(): print 'root' else: print 'none' else: import os if os.environ.get('FAKEROOTKEY'): print 'fake' else: if os.geteuid() == 0: print 'root' else: print 'none' bup-0.29/t/sampledata/000077500000000000000000000000001303127641400146115ustar00rootroot00000000000000bup-0.29/t/sampledata/b2/000077500000000000000000000000001303127641400151145ustar00rootroot00000000000000bup-0.29/t/sampledata/b2/foozy000066400000000000000000000000001303127641400161730ustar00rootroot00000000000000bup-0.29/t/sampledata/b2/foozy2000066400000000000000000000000001303127641400162550ustar00rootroot00000000000000bup-0.29/t/sampledata/x000066400000000000000000000000351303127641400150010ustar00rootroot00000000000000Sun Jan 3 01:54:26 EST 2010 bup-0.29/t/sampledata/y-2000000066400000000000000000000001251303127641400153610ustar00rootroot00000000000000this file should come *before* y/ in the sort order, because of that trailing slash. bup-0.29/t/sampledata/y/000077500000000000000000000000001303127641400150615ustar00rootroot00000000000000bup-0.29/t/sampledata/y/testfile1000066400000000000000000004657101303127641400167210ustar00rootroot00000000000000#!/hfe/ova/rai clguba sebz ohc vzcbeg bcgvbaf, qerphefr sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc qerphefr -- k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf d,dhvrg qba'g npghnyyl cevag svyranzrf cebsvyr eha haqre gur clguba cebsvyre """ b = bcgvbaf.Bcgvbaf('ohc qerphefr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar svyranzr rkcrpgrq") vg = qerphefr.erphefvir_qveyvfg(rkgen, bcg.kqri) vs bcg.cebsvyr: vzcbeg pCebsvyr qrs qb_vg(): sbe v va vg: cnff pCebsvyr.eha('qb_vg()') ryfr: vs bcg.dhvrg: sbe v va vg: cnff ryfr: sbe (anzr,fg) va vg: cevag anzr vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc fcyvg [-gpo] [-a anzr] [--orapu] [svyranzrf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu o,oybof bhgchg n frevrf bs oybo vqf g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) A,abbc qba'g npghnyyl fnir gur qngn naljurer d,dhvrg qba'g cevag cebterff zrffntrf i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) pbcl whfg pbcl vachg gb bhgchg, unfufcyvggvat nybat gur jnl orapu cevag orapuznex gvzvatf gb fgqree znk-cnpx-fvmr= znkvzhz olgrf va n fvatyr cnpx znk-cnpx-bowrpgf= znkvzhz ahzore bs bowrpgf va n fvatyr cnpx snabhg= znkvzhz ahzore bs oybof va n fvatyr gerr """ b = bcgvbaf.Bcgvbaf('ohc fcyvg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr be bcg.abbc be bcg.pbcl): b.sngny("hfr bar be zber bs -o, -g, -p, -a, -A, --pbcl") vs (bcg.abbc be bcg.pbcl) naq (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny('-A vf vapbzcngvoyr jvgu -o, -g, -p, -a') vs bcg.ireobfr >= 2: tvg.ireobfr = bcg.ireobfr - 1 bcg.orapu = 1 vs bcg.znk_cnpx_fvmr: unfufcyvg.znk_cnpx_fvmr = cnefr_ahz(bcg.znk_cnpx_fvmr) vs bcg.znk_cnpx_bowrpgf: unfufcyvg.znk_cnpx_bowrpgf = cnefr_ahz(bcg.znk_cnpx_bowrpgf) vs bcg.snabhg: unfufcyvg.snabhg = cnefr_ahz(bcg.snabhg) vs bcg.oybof: unfufcyvg.snabhg = 0 vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") fgneg_gvzr = gvzr.gvzr() ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.abbc be bcg.pbcl: pyv = j = byqers = Abar ryvs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() svyrf = rkgen naq (bcra(sa) sbe sa va rkgen) be [flf.fgqva] vs j: funyvfg = unfufcyvg.fcyvg_gb_funyvfg(j, svyrf) gerr = j.arj_gerr(funyvfg) ryfr: ynfg = 0 sbe (oybo, ovgf) va unfufcyvg.unfufcyvg_vgre(svyrf): unfufcyvg.gbgny_fcyvg += yra(oybo) vs bcg.pbcl: flf.fgqbhg.jevgr(fge(oybo)) zrtf = unfufcyvg.gbgny_fcyvg/1024/1024 vs abg bcg.dhvrg naq ynfg != zrtf: cebterff('%q Zolgrf ernq\e' % zrtf) ynfg = zrtf cebterff('%q Zolgrf ernq, qbar.\a' % zrtf) vs bcg.ireobfr: ybt('\a') vs bcg.oybof: sbe (zbqr,anzr,ova) va funyvfg: cevag ova.rapbqr('urk') vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fcyvg\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') vs j: j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() frpf = gvzr.gvzr() - fgneg_gvzr fvmr = unfufcyvg.gbgny_fcyvg vs bcg.orapu: ybt('\aohc: %.2sxolgrf va %.2s frpf = %.2s xolgrf/frp\a' % (fvmr/1024., frpf, fvmr/1024./frpf)) #!/hfe/ova/rai clguba vzcbeg flf, er, fgehpg, zznc sebz ohc vzcbeg tvg, bcgvbaf sebz ohc.urycref vzcbeg * qrs f_sebz_olgrf(olgrf): pyvfg = [pue(o) sbe o va olgrf] erghea ''.wbva(pyvfg) qrs ercbeg(pbhag): svryqf = ['IzFvmr', 'IzEFF', 'IzQngn', 'IzFgx'] q = {} sbe yvar va bcra('/cebp/frys/fgnghf').ernqyvarf(): y = er.fcyvg(e':\f*', yvar.fgevc(), 1) q[y[0]] = y[1] vs pbhag >= 0: r1 = pbhag svryqf = [q[x] sbe x va svryqf] ryfr: r1 = '' cevag ('%9f ' + ('%10f ' * yra(svryqf))) % ghcyr([r1] + svryqf) flf.fgqbhg.syhfu() bcgfcrp = """ ohc zrzgrfg [-a ryrzragf] [-p plpyrf] -- a,ahzore= ahzore bs bowrpgf cre plpyr p,plpyrf= 
ahzore bs plpyrf gb eha vtaber-zvqk vtaber .zvqk svyrf, hfr bayl .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zrzgrfg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') tvg.vtaber_zvqk = bcg.vtaber_zvqk tvg.purpx_ercb_be_qvr() z = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) plpyrf = bcg.plpyrf be 100 ahzore = bcg.ahzore be 10000 ercbeg(-1) s = bcra('/qri/henaqbz') n = zznc.zznc(-1, 20) ercbeg(0) sbe p va kenatr(plpyrf): sbe a va kenatr(ahzore): o = s.ernq(3) vs 0: olgrf = yvfg(fgehpg.hacnpx('!OOO', o)) + [0]*17 olgrf[2] &= 0ks0 ova = fgehpg.cnpx('!20f', f_sebz_olgrf(olgrf)) ryfr: n[0:2] = o[0:2] n[2] = pue(beq(o[2]) & 0ks0) ova = fge(n[0:20]) #cevag ova.rapbqr('urk') z.rkvfgf(ova) ercbeg((p+1)*ahzore) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * qrs cevag_abqr(grkg, a): cersvk = '' vs bcg.unfu: cersvk += "%f " % a.unfu.rapbqr('urk') vs fgng.F_VFQVE(a.zbqr): cevag '%f%f/' % (cersvk, grkg) ryvs fgng.F_VFYAX(a.zbqr): cevag '%f%f@' % (cersvk, grkg) ryfr: cevag '%f%f' % (cersvk, grkg) bcgfcrp = """ ohc yf -- f,unfu fubj unfu sbe rnpu svyr """ b = bcgvbaf.Bcgvbaf('ohc yf', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) vs abg rkgen: rkgen = ['/'] erg = 0 sbe q va rkgen: gel: a = gbc.yerfbyir(q) vs fgng.F_VFQVE(a.zbqr): sbe fho va a: cevag_abqr(fho.anzr, fho) ryfr: cevag_abqr(q, a) rkprcg isf.AbqrReebe, r: ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er, fgng, ernqyvar, sazngpu sebz ohc vzcbeg bcgvbaf, tvg, fudhbgr, isf sebz ohc.urycref vzcbeg * qrs abqr_anzr(grkg, a): vs fgng.F_VFQVE(a.zbqr): erghea '%f/' % grkg ryvs fgng.F_VFYAX(a.zbqr): erghea '%f@' % grkg ryfr: erghea '%f' % grkg qrs qb_yf(cngu, a): y = [] vs fgng.F_VFQVE(a.zbqr): sbe fho va a: y.nccraq(abqr_anzr(fho.anzr, fho)) ryfr: y.nccraq(abqr_anzr(cngu, a)) cevag pbyhzangr(y, '') qrs jevgr_gb_svyr(vas, bhgs): sbe oybo va puhaxlernqre(vas): bhgs.jevgr(oybo) qrs vachgvgre(): vs bf.vfnggl(flf.fgqva.svyrab()): juvyr 1: gel: lvryq enj_vachg('ohc> ') rkprcg RBSReebe: oernx ryfr: sbe yvar va flf.fgqva: lvryq yvar qrs _pbzcyrgre_trg_fhof(yvar): (dglcr, ynfgjbeq) = fudhbgr.hasvavfurq_jbeq(yvar) (qve,anzr) = bf.cngu.fcyvg(ynfgjbeq) #ybt('\apbzcyrgre: %e %e %e\a' % (dglcr, ynfgjbeq, grkg)) a = cjq.erfbyir(qve) fhof = yvfg(svygre(ynzoqn k: k.anzr.fgnegfjvgu(anzr), a.fhof())) erghea (qve, anzr, dglcr, ynfgjbeq, fhof) _ynfg_yvar = Abar _ynfg_erf = Abar qrs pbzcyrgre(grkg, fgngr): tybony _ynfg_yvar tybony _ynfg_erf gel: yvar = ernqyvar.trg_yvar_ohssre()[:ernqyvar.trg_raqvqk()] vs _ynfg_yvar != yvar: _ynfg_erf = _pbzcyrgre_trg_fhof(yvar) _ynfg_yvar = yvar (qve, anzr, dglcr, ynfgjbeq, fhof) = _ynfg_erf vs fgngr < yra(fhof): fa = fhof[fgngr] fa1 = fa.erfbyir('') # qrers flzyvaxf shyyanzr = bf.cngu.wbva(qve, fa.anzr) vs fgng.F_VFQVE(fa1.zbqr): erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr+'/', grezvangr=Snyfr) ryfr: erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr, grezvangr=Gehr) + ' ' erghea grkg + erg rkprcg Rkprcgvba, r: ybt('\areebe va pbzcyrgvba: %f\a' % r) bcgfcrp = """ ohc sgc """ b = bcgvbaf.Bcgvbaf('ohc sgc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) cjq = gbc vs rkgen: yvarf = rkgen ryfr: ernqyvar.frg_pbzcyrgre_qryvzf(' \g\a\e/') ernqyvar.frg_pbzcyrgre(pbzcyrgre) ernqyvar.cnefr_naq_ovaq("gno: pbzcyrgr") yvarf = vachgvgre() sbe yvar va yvarf: vs abg yvar.fgevc(): pbagvahr jbeqf = [jbeq sbe 
(jbeqfgneg,jbeq) va fudhbgr.dhbgrfcyvg(yvar)] pzq = jbeqf[0].ybjre() #ybt('rkrphgr: %e %e\a' % (pzq, cnez)) gel: vs pzq == 'yf': sbe cnez va (jbeqf[1:] be ['.']): qb_yf(cnez, cjq.erfbyir(cnez)) ryvs pzq == 'pq': sbe cnez va jbeqf[1:]: cjq = cjq.erfbyir(cnez) ryvs pzq == 'cjq': cevag cjq.shyyanzr() ryvs pzq == 'png': sbe cnez va jbeqf[1:]: jevgr_gb_svyr(cjq.erfbyir(cnez).bcra(), flf.fgqbhg) ryvs pzq == 'trg': vs yra(jbeqf) abg va [2,3]: envfr Rkprcgvba('Hfntr: trg [ybpnyanzr]') eanzr = jbeqf[1] (qve,onfr) = bf.cngu.fcyvg(eanzr) yanzr = yra(jbeqf)>2 naq jbeqf[2] be onfr vas = cjq.erfbyir(eanzr).bcra() ybt('Fnivat %e\a' % yanzr) jevgr_gb_svyr(vas, bcra(yanzr, 'jo')) ryvs pzq == 'ztrg': sbe cnez va jbeqf[1:]: (qve,onfr) = bf.cngu.fcyvg(cnez) sbe a va cjq.erfbyir(qve).fhof(): vs sazngpu.sazngpu(a.anzr, onfr): gel: ybt('Fnivat %e\a' % a.anzr) vas = a.bcra() bhgs = bcra(a.anzr, 'jo') jevgr_gb_svyr(vas, bhgs) bhgs.pybfr() rkprcg Rkprcgvba, r: ybt(' reebe: %f\a' % r) ryvs pzq == 'uryc' be pzq == '?': ybt('Pbzznaqf: yf pq cjq png trg ztrg uryc dhvg\a') ryvs pzq == 'dhvg' be pzq == 'rkvg' be pzq == 'olr': oernx ryfr: envfr Rkprcgvba('ab fhpu pbzznaq %e' % pzq) rkprcg Rkprcgvba, r: ybt('reebe: %f\a' % r) #envfr #!/hfe/ova/rai clguba vzcbeg flf, zznc sebz ohc vzcbeg bcgvbaf, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc enaqbz [-F frrq] -- F,frrq= bcgvbany enaqbz ahzore frrq (qrsnhyg 1) s,sbepr cevag enaqbz qngn gb fgqbhg rira vs vg'f n ggl """ b = bcgvbaf.Bcgvbaf('ohc enaqbz', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") gbgny = cnefr_ahz(rkgen[0]) vs bcg.sbepr be (abg bf.vfnggl(1) naq abg ngbv(bf.raiveba.trg('OHC_SBEPR_GGL')) & 1): _unfufcyvg.jevgr_enaqbz(flf.fgqbhg.svyrab(), gbgny, bcg.frrq be 0) ryfr: ybt('reebe: abg jevgvat ovanel qngn gb n grezvany. 
Hfr -s gb sbepr.\a') flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc uryc """ b = bcgvbaf.Bcgvbaf('ohc uryc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) == 0: # gur jenccre cebtenz cebivqrf gur qrsnhyg hfntr fgevat bf.rkrpic(bf.raiveba['OHC_ZNVA_RKR'], ['ohc']) ryvs yra(rkgen) == 1: qbpanzr = (rkgen[0]=='ohc' naq 'ohc' be ('ohc-%f' % rkgen[0])) rkr = flf.neti[0] (rkrcngu, rkrsvyr) = bf.cngu.fcyvg(rkr) znacngu = bf.cngu.wbva(rkrcngu, '../Qbphzragngvba/' + qbpanzr + '.[1-9]') t = tybo.tybo(znacngu) vs t: bf.rkrpic('zna', ['zna', '-y', t[0]]) ryfr: bf.rkrpic('zna', ['zna', qbpanzr]) ryfr: b.sngny("rknpgyl bar pbzznaq anzr rkcrpgrq") #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng, reeab, shfr, er, gvzr, grzcsvyr sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * pynff Fgng(shfr.Fgng): qrs __vavg__(frys): frys.fg_zbqr = 0 frys.fg_vab = 0 frys.fg_qri = 0 frys.fg_ayvax = 0 frys.fg_hvq = 0 frys.fg_tvq = 0 frys.fg_fvmr = 0 frys.fg_ngvzr = 0 frys.fg_zgvzr = 0 frys.fg_pgvzr = 0 frys.fg_oybpxf = 0 frys.fg_oyxfvmr = 0 frys.fg_eqri = 0 pnpur = {} qrs pnpur_trg(gbc, cngu): cnegf = cngu.fcyvg('/') pnpur[('',)] = gbc p = Abar znk = yra(cnegf) #ybt('pnpur: %e\a' % pnpur.xrlf()) sbe v va enatr(znk): cer = cnegf[:znk-v] #ybt('pnpur gelvat: %e\a' % cer) p = pnpur.trg(ghcyr(cer)) vs p: erfg = cnegf[znk-v:] sbe e va erfg: #ybt('erfbyivat %e sebz %e\a' % (e, p.shyyanzr())) p = p.yerfbyir(e) xrl = ghcyr(cer + [e]) #ybt('fnivat: %e\a' % (xrl,)) pnpur[xrl] = p oernx nffreg(p) erghea p pynff OhcSf(shfr.Shfr): qrs __vavg__(frys, gbc): shfr.Shfr.__vavg__(frys) frys.gbc = gbc qrs trgngge(frys, cngu): ybt('--trgngge(%e)\a' % cngu) gel: abqr = pnpur_trg(frys.gbc, cngu) fg = Fgng() fg.fg_zbqr = abqr.zbqr fg.fg_ayvax = abqr.ayvaxf() fg.fg_fvmr = abqr.fvmr() fg.fg_zgvzr = abqr.zgvzr fg.fg_pgvzr = abqr.pgvzr fg.fg_ngvzr = abqr.ngvzr erghea fg rkprcg isf.AbFhpuSvyr: erghea -reeab.RABRAG qrs ernqqve(frys, cngu, bssfrg): ybt('--ernqqve(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) lvryq shfr.Qveragel('.') lvryq shfr.Qveragel('..') sbe fho va abqr.fhof(): lvryq shfr.Qveragel(fho.anzr) qrs ernqyvax(frys, cngu): ybt('--ernqyvax(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) erghea abqr.ernqyvax() qrs bcra(frys, cngu, syntf): ybt('--bcra(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) nppzbqr = bf.B_EQBAYL | bf.B_JEBAYL | bf.B_EQJE vs (syntf & nppzbqr) != bf.B_EQBAYL: erghea -reeab.RNPPRF abqr.bcra() qrs eryrnfr(frys, cngu, syntf): ybt('--eryrnfr(%e)\a' % cngu) qrs ernq(frys, cngu, fvmr, bssfrg): ybt('--ernq(%e)\a' % cngu) a = pnpur_trg(frys.gbc, cngu) b = a.bcra() b.frrx(bssfrg) erghea b.ernq(fvmr) vs abg unfngge(shfr, '__irefvba__'): envfr EhagvzrReebe, "lbhe shfr zbqhyr vf gbb byq sbe shfr.__irefvba__" shfr.shfr_clguba_ncv = (0, 2) bcgfcrp = """ ohc shfr [-q] [-s] -- q,qroht vapernfr qroht yriry s,sbertebhaq eha va sbertebhaq """ b = bcgvbaf.Bcgvbaf('ohc shfr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) s = OhcSf(gbc) s.shfr_netf.zbhagcbvag = rkgen[0] vs bcg.qroht: s.shfr_netf.nqq('qroht') vs bcg.sbertebhaq: s.shfr_netf.frgzbq('sbertebhaq') cevag s.zhygvguernqrq s.zhygvguernqrq = Snyfr s.znva() #!/hfe/ova/rai clguba sebz ohc vzcbeg tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ [OHC_QVE=...] 
ohc vavg [-e ubfg:cngu] -- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc vavg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") vs bcg.erzbgr: tvg.vavg_ercb() # ybpny ercb tvg.purpx_ercb_be_qvr() pyv = pyvrag.Pyvrag(bcg.erzbgr, perngr=Gehr) pyv.pybfr() ryfr: tvg.vavg_ercb() #!/hfe/ova/rai clguba vzcbeg flf, zngu, fgehpg, tybo sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * CNTR_FVMR=4096 FUN_CRE_CNTR=CNTR_FVMR/200. qrs zretr(vqkyvfg, ovgf, gnoyr): pbhag = 0 sbe r va tvg.vqkzretr(vqkyvfg): pbhag += 1 cersvk = tvg.rkgenpg_ovgf(r, ovgf) gnoyr[cersvk] = pbhag lvryq r qrs qb_zvqk(bhgqve, bhgsvyranzr, vasvyranzrf): vs abg bhgsvyranzr: nffreg(bhgqve) fhz = Fun1('\0'.wbva(vasvyranzrf)).urkqvtrfg() bhgsvyranzr = '%f/zvqk-%f.zvqk' % (bhgqve, fhz) vac = [] gbgny = 0 sbe anzr va vasvyranzrf: vk = tvg.CnpxVqk(anzr) vac.nccraq(vk) gbgny += yra(vk) ybt('Zretvat %q vaqrkrf (%q bowrpgf).\a' % (yra(vasvyranzrf), gbgny)) vs (abg bcg.sbepr naq (gbgny < 1024 naq yra(vasvyranzrf) < 3)) \ be (bcg.sbepr naq abg gbgny): ybt('zvqk: abguvat gb qb.\a') erghea cntrf = vag(gbgny/FUN_CRE_CNTR) be 1 ovgf = vag(zngu.prvy(zngu.ybt(cntrf, 2))) ragevrf = 2**ovgf ybt('Gnoyr fvmr: %q (%q ovgf)\a' % (ragevrf*4, ovgf)) gnoyr = [0]*ragevrf gel: bf.hayvax(bhgsvyranzr) rkprcg BFReebe: cnff s = bcra(bhgsvyranzr + '.gzc', 'j+') s.jevgr('ZVQK\0\0\0\2') s.jevgr(fgehpg.cnpx('!V', ovgf)) nffreg(s.gryy() == 12) s.jevgr('\0'*4*ragevrf) sbe r va zretr(vac, ovgf, gnoyr): s.jevgr(r) s.jevgr('\0'.wbva(bf.cngu.onfranzr(c) sbe c va vasvyranzrf)) s.frrx(12) s.jevgr(fgehpg.cnpx('!%qV' % ragevrf, *gnoyr)) s.pybfr() bf.eranzr(bhgsvyranzr + '.gzc', bhgsvyranzr) # guvf vf whfg sbe grfgvat vs 0: c = tvg.CnpxZvqk(bhgsvyranzr) nffreg(yra(c.vqkanzrf) == yra(vasvyranzrf)) cevag c.vqkanzrf nffreg(yra(c) == gbgny) cv = vgre(c) sbe v va zretr(vac, gbgny, ovgf, gnoyr): nffreg(v == cv.arkg()) nffreg(c.rkvfgf(v)) cevag bhgsvyranzr bcgfcrp = """ ohc zvqk [bcgvbaf...] 
-- b,bhgchg= bhgchg zvqk svyranzr (qrsnhyg: nhgb-trarengrq) n,nhgb nhgbzngvpnyyl perngr .zvqk sebz nal havaqrkrq .vqk svyrf s,sbepr nhgbzngvpnyyl perngr .zvqk sebz *nyy* .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zvqk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen naq (bcg.nhgb be bcg.sbepr): b.sngny("lbh pna'g hfr -s/-n naq nyfb cebivqr svyranzrf") tvg.purpx_ercb_be_qvr() vs rkgen: qb_zvqk(tvg.ercb('bowrpgf/cnpx'), bcg.bhgchg, rkgen) ryvs bcg.nhgb be bcg.sbepr: cnguf = [tvg.ercb('bowrpgf/cnpx')] cnguf += tybo.tybo(tvg.ercb('vaqrk-pnpur/*/.')) sbe cngu va cnguf: ybt('zvqk: fpnaavat %f\a' % cngu) vs bcg.sbepr: qb_zvqk(cngu, bcg.bhgchg, tybo.tybo('%f/*.vqk' % cngu)) ryvs bcg.nhgb: z = tvg.CnpxVqkYvfg(cngu) arrqrq = {} sbe cnpx va z.cnpxf: # bayl .vqk svyrf jvgubhg n .zvqk ner bcra vs cnpx.anzr.raqfjvgu('.vqk'): arrqrq[cnpx.anzr] = 1 qry z qb_zvqk(cngu, bcg.bhgchg, arrqrq.xrlf()) ybt('\a') ryfr: b.sngny("lbh zhfg hfr -s be -n be cebivqr vachg svyranzrf") #!/hfe/ova/rai clguba vzcbeg flf, bf, enaqbz sebz ohc vzcbeg bcgvbaf sebz ohc.urycref vzcbeg * qrs enaqoybpx(a): y = [] sbe v va kenatr(a): y.nccraq(pue(enaqbz.enaqenatr(0,256))) erghea ''.wbva(y) bcgfcrp = """ ohc qnzntr [-a pbhag] [-f znkfvmr] [-F frrq] -- JNEAVAT: GUVF PBZZNAQ VF RKGERZRYL QNATREBHF a,ahz= ahzore bs oybpxf gb qnzntr f,fvmr= znkvzhz fvmr bs rnpu qnzntrq oybpx creprag= znkvzhz fvmr bs rnpu qnzntrq oybpx (nf n creprag bs ragver svyr) rdhny fcernq qnzntr rirayl guebhtubhg gur svyr F,frrq= enaqbz ahzore frrq (sbe ercrngnoyr grfgf) """ b = bcgvbaf.Bcgvbaf('ohc qnzntr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg rkgen: b.sngny('svyranzrf rkcrpgrq') vs bcg.frrq != Abar: enaqbz.frrq(bcg.frrq) sbe anzr va rkgen: ybt('Qnzntvat "%f"...\a' % anzr) s = bcra(anzr, 'e+o') fg = bf.sfgng(s.svyrab()) fvmr = fg.fg_fvmr vs bcg.creprag be bcg.fvmr: zf1 = vag(sybng(bcg.creprag be 0)/100.0*fvmr) be fvmr zf2 = bcg.fvmr be fvmr znkfvmr = zva(zf1, zf2) ryfr: znkfvmr = 1 puhaxf = bcg.ahz be 10 puhaxfvmr = fvmr/puhaxf sbe e va enatr(puhaxf): fm = enaqbz.enaqenatr(1, znkfvmr+1) vs fm > fvmr: fm = fvmr vs bcg.rdhny: bsf = e*puhaxfvmr ryfr: bsf = enaqbz.enaqenatr(0, fvmr - fm + 1) ybt(' %6q olgrf ng %q\a' % (fm, bsf)) s.frrx(bsf) s.jevgr(enaqoybpx(fm)) s.pybfr() #!/hfe/ova/rai clguba vzcbeg flf, fgehpg, zznc sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * fhfcraqrq_j = Abar qrs vavg_qve(pbaa, net): tvg.vavg_ercb(net) ybt('ohc freire: ohcqve vavgvnyvmrq: %e\a' % tvg.ercbqve) pbaa.bx() qrs frg_qve(pbaa, net): tvg.purpx_ercb_be_qvr(net) ybt('ohc freire: ohcqve vf %e\a' % tvg.ercbqve) pbaa.bx() qrs yvfg_vaqrkrf(pbaa, whax): tvg.purpx_ercb_be_qvr() sbe s va bf.yvfgqve(tvg.ercb('bowrpgf/cnpx')): vs s.raqfjvgu('.vqk'): pbaa.jevgr('%f\a' % s) pbaa.bx() qrs fraq_vaqrk(pbaa, anzr): tvg.purpx_ercb_be_qvr() nffreg(anzr.svaq('/') < 0) nffreg(anzr.raqfjvgu('.vqk')) vqk = tvg.CnpxVqk(tvg.ercb('bowrpgf/cnpx/%f' % anzr)) pbaa.jevgr(fgehpg.cnpx('!V', yra(vqk.znc))) pbaa.jevgr(vqk.znc) pbaa.bx() qrs erprvir_bowrpgf(pbaa, whax): tybony fhfcraqrq_j tvg.purpx_ercb_be_qvr() fhttrfgrq = {} vs fhfcraqrq_j: j = fhfcraqrq_j fhfcraqrq_j = Abar ryfr: j = tvg.CnpxJevgre() juvyr 1: af = pbaa.ernq(4) vs abg af: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq yratgu urnqre, tbg RBS\a') a = fgehpg.hacnpx('!V', af)[0] #ybt('rkcrpgvat %q olgrf\a' % a) vs abg a: ybt('ohc freire: erprvirq %q bowrpg%f.\a' % (j.pbhag, j.pbhag!=1 naq "f" be '')) shyycngu = j.pybfr() vs shyycngu: (qve, anzr) = bf.cngu.fcyvg(shyycngu) pbaa.jevgr('%f.vqk\a' % anzr) pbaa.bx() 
erghea ryvs a == 0kssssssss: ybt('ohc freire: erprvir-bowrpgf fhfcraqrq.\a') fhfcraqrq_j = j pbaa.bx() erghea ohs = pbaa.ernq(a) # bowrpg fvmrf va ohc ner ernfbanoyl fznyy #ybt('ernq %q olgrf\a' % a) vs yra(ohs) < a: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq %q olgrf, tbg %q\a' % (a, yra(ohs))) (glcr, pbagrag) = tvg._qrpbqr_cnpxbow(ohs) fun = tvg.pnyp_unfu(glcr, pbagrag) byqcnpx = j.rkvfgf(fun) # SVKZR: jr bayl fhttrfg n fvatyr vaqrk cre plpyr, orpnhfr gur pyvrag # vf pheeragyl qhzo gb qbjaybnq zber guna bar cre plpyr naljnl. # Npghnyyl jr fubhyq svk gur pyvrag, ohg guvf vf n zvabe bcgvzvmngvba # ba gur freire fvqr. vs abg fhttrfgrq naq \ byqcnpx naq (byqcnpx == Gehr be byqcnpx.raqfjvgu('.zvqk')): # SVKZR: jr fubhyqa'g ernyyl unir gb xabj nobhg zvqk svyrf # ng guvf ynlre. Ohg rkvfgf() ba n zvqk qbrfa'g erghea gur # cnpxanzr (fvapr vg qbrfa'g xabj)... cebonoyl jr fubhyq whfg # svk gung qrsvpvrapl bs zvqk svyrf riraghnyyl, nygubhtu vg'yy # znxr gur svyrf ovttre. Guvf zrgubq vf pregnvayl abg irel # rssvpvrag. j.bowpnpur.erserfu(fxvc_zvqk = Gehr) byqcnpx = j.bowpnpur.rkvfgf(fun) ybt('arj fhttrfgvba: %e\a' % byqcnpx) nffreg(byqcnpx) nffreg(byqcnpx != Gehr) nffreg(abg byqcnpx.raqfjvgu('.zvqk')) j.bowpnpur.erserfu(fxvc_zvqk = Snyfr) vs abg fhttrfgrq naq byqcnpx: nffreg(byqcnpx.raqfjvgu('.vqk')) (qve,anzr) = bf.cngu.fcyvg(byqcnpx) vs abg (anzr va fhttrfgrq): ybt("ohc freire: fhttrfgvat vaqrk %f\a" % anzr) pbaa.jevgr('vaqrk %f\a' % anzr) fhttrfgrq[anzr] = 1 ryfr: j._enj_jevgr([ohs]) # ABGERNPURQ qrs ernq_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() e = tvg.ernq_ers(ersanzr) pbaa.jevgr('%f\a' % (e be '').rapbqr('urk')) pbaa.bx() qrs hcqngr_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() arjiny = pbaa.ernqyvar().fgevc() byqiny = pbaa.ernqyvar().fgevc() tvg.hcqngr_ers(ersanzr, arjiny.qrpbqr('urk'), byqiny.qrpbqr('urk')) pbaa.bx() qrs png(pbaa, vq): tvg.purpx_ercb_be_qvr() gel: sbe oybo va tvg.png(vq): pbaa.jevgr(fgehpg.cnpx('!V', yra(oybo))) pbaa.jevgr(oybo) rkprcg XrlReebe, r: ybt('freire: reebe: %f\a' % r) pbaa.jevgr('\0\0\0\0') pbaa.reebe(r) ryfr: pbaa.jevgr('\0\0\0\0') pbaa.bx() bcgfcrp = """ ohc freire """ b = bcgvbaf.Bcgvbaf('ohc freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') ybt('ohc freire: ernqvat sebz fgqva.\a') pbzznaqf = { 'vavg-qve': vavg_qve, 'frg-qve': frg_qve, 'yvfg-vaqrkrf': yvfg_vaqrkrf, 'fraq-vaqrk': fraq_vaqrk, 'erprvir-bowrpgf': erprvir_bowrpgf, 'ernq-ers': ernq_ers, 'hcqngr-ers': hcqngr_ers, 'png': png, } # SVKZR: guvf cebgbpby vf gbgnyyl ynzr naq abg ng nyy shgher-cebbs. # (Rfcrpvnyyl fvapr jr nobeg pbzcyrgryl nf fbba nf *nalguvat* onq unccraf) pbaa = Pbaa(flf.fgqva, flf.fgqbhg) ye = yvarernqre(pbaa) sbe _yvar va ye: yvar = _yvar.fgevc() vs abg yvar: pbagvahr ybt('ohc freire: pbzznaq: %e\a' % yvar) jbeqf = yvar.fcyvg(' ', 1) pzq = jbeqf[0] erfg = yra(jbeqf)>1 naq jbeqf[1] be '' vs pzq == 'dhvg': oernx ryfr: pzq = pbzznaqf.trg(pzq) vs pzq: pzq(pbaa, erfg) ryfr: envfr Rkprcgvba('haxabja freire pbzznaq: %e\a' % yvar) ybt('ohc freire: qbar\a') #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc wbva [-e ubfg:cngu] [ersf be unfurf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc wbva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg rkgen: rkgen = yvarernqre(flf.fgqva) erg = 0 vs bcg.erzbgr: pyv = pyvrag.Pyvrag(bcg.erzbgr) png = pyv.png ryfr: pc = tvg.PngCvcr() png = pc.wbva sbe vq va rkgen: gel: sbe oybo va png(vq): flf.fgqbhg.jevgr(oybo) rkprcg XrlReebe, r: flf.fgqbhg.syhfu() ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, er, reeab, fgng, gvzr, zngu sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, vaqrk, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc fnir [-gp] [-a anzr] -- e,erzbgr= erzbgr ercbfvgbel cngu g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) d,dhvrg qba'g fubj cebterff zrgre fznyyre= bayl onpx hc svyrf fznyyre guna a olgrf """ b = bcgvbaf.Bcgvbaf('ohc fnir', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny("hfr bar be zber bs -g, -p, -a") vs abg rkgen: b.sngny("ab svyranzrf tvira") bcg.cebterff = (vfggl naq abg bcg.dhvrg) bcg.fznyyre = cnefr_ahz(bcg.fznyyre be 0) vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() unaqyr_pgey_p() qrs rngfynfu(qve): vs qve.raqfjvgu('/'): erghea qve[:-1] ryfr: erghea qve cnegf = [''] funyvfgf = [[]] qrs _chfu(cneg): nffreg(cneg) cnegf.nccraq(cneg) funyvfgf.nccraq([]) qrs _cbc(sbepr_gerr): nffreg(yra(cnegf) >= 1) cneg = cnegf.cbc() funyvfg = funyvfgf.cbc() gerr = sbepr_gerr be j.arj_gerr(funyvfg) vs funyvfgf: funyvfgf[-1].nccraq(('40000', cneg, gerr)) ryfr: # guvf jnf gur gbcyriry, fb chg vg onpx sbe fnavgl funyvfgf.nccraq(funyvfg) erghea gerr ynfgerznva = Abar qrs cebterff_ercbeg(a): tybony pbhag, fhopbhag, ynfgerznva fhopbhag += a pp = pbhag + fhopbhag cpg = gbgny naq (pp*100.0/gbgny) be 0 abj = gvzr.gvzr() ryncfrq = abj - gfgneg xcf = ryncfrq naq vag(pp/1024./ryncfrq) xcf_senp = 10 ** vag(zngu.ybt(xcf+1, 10) - 1) xcf = vag(xcf/xcf_senp)*xcf_senp vs pp: erznva = ryncfrq*1.0/pp * (gbgny-pp) ryfr: erznva = 0.0 vs (ynfgerznva naq (erznva > ynfgerznva) naq ((erznva - ynfgerznva)/ynfgerznva < 0.05)): erznva = ynfgerznva ryfr: ynfgerznva = erznva ubhef = vag(erznva/60/60) zvaf = vag(erznva/60 - ubhef*60) frpf = vag(erznva - ubhef*60*60 - zvaf*60) vs ryncfrq < 30: erznvafge = '' xcffge = '' ryfr: xcffge = '%qx/f' % xcf vs ubhef: erznvafge = '%qu%qz' % (ubhef, zvaf) ryvs zvaf: erznvafge = '%qz%q' % (zvaf, frpf) ryfr: erznvafge = '%qf' % frpf cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf) %f %f\e' % (cpg, pp/1024, gbgny/1024, spbhag, sgbgny, erznvafge, xcffge)) e = vaqrk.Ernqre(tvg.ercb('ohcvaqrk')) qrs nyernql_fnirq(rag): erghea rag.vf_inyvq() naq j.rkvfgf(rag.fun) naq rag.fun qrs jnagerphefr_cer(rag): erghea abg nyernql_fnirq(rag) qrs jnagerphefr_qhevat(rag): erghea abg nyernql_fnirq(rag) be rag.fun_zvffvat() gbgny = sgbgny = 0 vs bcg.cebterff: sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_cer): vs abg (sgbgny % 10024): cebterff('Ernqvat vaqrk: %q\e' % sgbgny) rkvfgf = rag.rkvfgf() unfuinyvq = nyernql_fnirq(rag) 
rag.frg_fun_zvffvat(abg unfuinyvq) vs abg bcg.fznyyre be rag.fvmr < bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: gbgny += rag.fvmr sgbgny += 1 cebterff('Ernqvat vaqrk: %q, qbar.\a' % sgbgny) unfufcyvg.cebterff_pnyyonpx = cebterff_ercbeg gfgneg = gvzr.gvzr() pbhag = fhopbhag = spbhag = 0 ynfgfxvc_anzr = Abar ynfgqve = '' sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_qhevat): (qve, svyr) = bf.cngu.fcyvg(rag.anzr) rkvfgf = (rag.syntf & vaqrk.VK_RKVFGF) unfuinyvq = nyernql_fnirq(rag) jnfzvffvat = rag.fun_zvffvat() byqfvmr = rag.fvmr vs bcg.ireobfr: vs abg rkvfgf: fgnghf = 'Q' ryvs abg unfuinyvq: vs rag.fun == vaqrk.RZCGL_FUN: fgnghf = 'N' ryfr: fgnghf = 'Z' ryfr: fgnghf = ' ' vs bcg.ireobfr >= 2: ybt('%f %-70f\a' % (fgnghf, rag.anzr)) ryvs abg fgng.F_VFQVE(rag.zbqr) naq ynfgqve != qve: vs abg ynfgqve.fgnegfjvgu(qve): ybt('%f %-70f\a' % (fgnghf, bf.cngu.wbva(qve, ''))) ynfgqve = qve vs bcg.cebterff: cebterff_ercbeg(0) spbhag += 1 vs abg rkvfgf: pbagvahr vs bcg.fznyyre naq rag.fvmr >= bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: nqq_reebe('fxvccvat ynetr svyr "%f"' % rag.anzr) ynfgfxvc_anzr = rag.anzr pbagvahr nffreg(qve.fgnegfjvgu('/')) qvec = qve.fcyvg('/') juvyr cnegf > qvec: _cbc(sbepr_gerr = Abar) vs qve != '/': sbe cneg va qvec[yra(cnegf):]: _chfu(cneg) vs abg svyr: # ab svyranzr cbegvba zrnaf guvf vf n fhoqve. Ohg # fho/cneragqverpgbevrf nyernql unaqyrq va gur cbc/chfu() cneg nobir. byqgerr = nyernql_fnirq(rag) # znl or Abar arjgerr = _cbc(sbepr_gerr = byqgerr) vs abg byqgerr: vs ynfgfxvc_anzr naq ynfgfxvc_anzr.fgnegfjvgu(rag.anzr): rag.vainyvqngr() ryfr: rag.inyvqngr(040000, arjgerr) rag.ercnpx() vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr pbagvahr # vg'f abg n qverpgbel vq = Abar vs unfuinyvq: zbqr = '%b' % rag.tvgzbqr vq = rag.fun funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) ryfr: vs fgng.F_VFERT(rag.zbqr): gel: s = unfufcyvg.bcra_abngvzr(rag.anzr) rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = unfufcyvg.fcyvg_gb_oybo_be_gerr(j, [s]) ryfr: vs fgng.F_VFQVE(rag.zbqr): nffreg(0) # unaqyrq nobir ryvs fgng.F_VFYAX(rag.zbqr): gel: ey = bf.ernqyvax(rag.anzr) rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = ('120000', j.arj_oybo(ey)) ryfr: nqq_reebe(Rkprcgvba('fxvccvat fcrpvny svyr "%f"' % rag.anzr)) ynfgfxvc_anzr = rag.anzr vs vq: rag.inyvqngr(vag(zbqr, 8), vq) rag.ercnpx() funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr fhopbhag = 0 vs bcg.cebterff: cpg = gbgny naq pbhag*100.0/gbgny be 100 cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf), qbar. 
\a' % (cpg, pbhag/1024, gbgny/1024, spbhag, sgbgny)) juvyr yra(cnegf) > 1: _cbc(sbepr_gerr = Abar) nffreg(yra(funyvfgf) == 1) gerr = j.arj_gerr(funyvfgf[-1]) vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fnir\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq juvyr fnivat.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc gvpx """ b = bcgvbaf.Bcgvbaf('ohc gvpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") g = gvzr.gvzr() gyrsg = 1 - (g - vag(g)) gvzr.fyrrc(gyrsg) #!/hfe/ova/rai clguba vzcbeg bf, flf, fgng, gvzr sebz ohc vzcbeg bcgvbaf, tvg, vaqrk, qerphefr sebz ohc.urycref vzcbeg * qrs zretr_vaqrkrf(bhg, e1, e2): sbe r va vaqrk.ZretrVgre([e1, e2]): # SVKZR: fubhyqa'g jr erzbir qryrgrq ragevrf riraghnyyl? Jura? bhg.nqq_vkragel(r) pynff VgreUrycre: qrs __vavg__(frys, y): frys.v = vgre(y) frys.phe = Abar frys.arkg() qrs arkg(frys): gel: frys.phe = frys.v.arkg() rkprcg FgbcVgrengvba: frys.phe = Abar erghea frys.phe qrs purpx_vaqrk(ernqre): gel: ybt('purpx: purpxvat sbejneq vgrengvba...\a') r = Abar q = {} sbe r va ernqre.sbejneq_vgre(): vs r.puvyqera_a: vs bcg.ireobfr: ybt('%08k+%-4q %e\a' % (r.puvyqera_bsf, r.puvyqera_a, r.anzr)) nffreg(r.puvyqera_bsf) nffreg(r.anzr.raqfjvgu('/')) nffreg(abg q.trg(r.puvyqera_bsf)) q[r.puvyqera_bsf] = 1 vs r.syntf & vaqrk.VK_UNFUINYVQ: nffreg(r.fun != vaqrk.RZCGL_FUN) nffreg(r.tvgzbqr) nffreg(abg r be r.anzr == '/') # ynfg ragel vf *nyjnlf* / ybt('purpx: purpxvat abezny vgrengvba...\a') ynfg = Abar sbe r va ernqre: vs ynfg: nffreg(ynfg > r.anzr) ynfg = r.anzr rkprcg: ybt('vaqrk reebe! ng %e\a' % r) envfr ybt('purpx: cnffrq.\a') qrs hcqngr_vaqrk(gbc): ev = vaqrk.Ernqre(vaqrksvyr) jv = vaqrk.Jevgre(vaqrksvyr) evt = VgreUrycre(ev.vgre(anzr=gbc)) gfgneg = vag(gvzr.gvzr()) unfutra = Abar vs bcg.snxr_inyvq: qrs unfutra(anzr): erghea (0100644, vaqrk.SNXR_FUN) gbgny = 0 sbe (cngu,cfg) va qerphefr.erphefvir_qveyvfg([gbc], kqri=bcg.kqri): vs bcg.ireobfr>=2 be (bcg.ireobfr==1 naq fgng.F_VFQVE(cfg.fg_zbqr)): flf.fgqbhg.jevgr('%f\a' % cngu) flf.fgqbhg.syhfu() cebterff('Vaqrkvat: %q\e' % gbgny) ryvs abg (gbgny % 128): cebterff('Vaqrkvat: %q\e' % gbgny) gbgny += 1 juvyr evt.phe naq evt.phe.anzr > cngu: # qryrgrq cnguf vs evt.phe.rkvfgf(): evt.phe.frg_qryrgrq() evt.phe.ercnpx() evt.arkg() vs evt.phe naq evt.phe.anzr == cngu: # cnguf gung nyernql rkvfgrq vs cfg: evt.phe.sebz_fgng(cfg, gfgneg) vs abg (evt.phe.syntf & vaqrk.VK_UNFUINYVQ): vs unfutra: (evt.phe.tvgzbqr, evt.phe.fun) = unfutra(cngu) evt.phe.syntf |= vaqrk.VK_UNFUINYVQ vs bcg.snxr_vainyvq: evt.phe.vainyvqngr() evt.phe.ercnpx() evt.arkg() ryfr: # arj cnguf jv.nqq(cngu, cfg, unfutra = unfutra) cebterff('Vaqrkvat: %q, qbar.\a' % gbgny) vs ev.rkvfgf(): ev.fnir() jv.syhfu() vs jv.pbhag: je = jv.arj_ernqre() vs bcg.purpx: ybt('purpx: orsber zretvat: byqsvyr\a') purpx_vaqrk(ev) ybt('purpx: orsber zretvat: arjsvyr\a') purpx_vaqrk(je) zv = vaqrk.Jevgre(vaqrksvyr) zretr_vaqrkrf(zv, ev, je) ev.pybfr() zv.pybfr() je.pybfr() jv.nobeg() ryfr: jv.pybfr() bcgfcrp = """ ohc vaqrk <-c|z|h> [bcgvbaf...] 
-- c,cevag cevag gur vaqrk ragevrf sbe gur tvira anzrf (nyfb jbexf jvgu -h) z,zbqvsvrq cevag bayl nqqrq/qryrgrq/zbqvsvrq svyrf (vzcyvrf -c) f,fgnghf cevag rnpu svyranzr jvgu n fgnghf pune (N/Z/Q) (vzcyvrf -c) U,unfu cevag gur unfu sbe rnpu bowrpg arkg gb vgf anzr (vzcyvrf -c) y,ybat cevag zber vasbezngvba nobhg rnpu svyr h,hcqngr (erphefviryl) hcqngr gur vaqrk ragevrf sbe gur tvira svyranzrf k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf snxr-inyvq znex nyy vaqrk ragevrf nf hc-gb-qngr rira vs gurl nera'g snxr-vainyvq znex nyy vaqrk ragevrf nf vainyvq purpx pnershyyl purpx vaqrk svyr vagrtevgl s,vaqrksvyr= gur anzr bs gur vaqrk svyr (qrsnhyg 'vaqrk') i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) """ b = bcgvbaf.Bcgvbaf('ohc vaqrk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg (bcg.zbqvsvrq be bcg['cevag'] be bcg.fgnghf be bcg.hcqngr be bcg.purpx): b.sngny('fhccyl bar be zber bs -c, -f, -z, -h, be --purpx') vs (bcg.snxr_inyvq be bcg.snxr_vainyvq) naq abg bcg.hcqngr: b.sngny('--snxr-{va,}inyvq ner zrnavatyrff jvgubhg -h') vs bcg.snxr_inyvq naq bcg.snxr_vainyvq: b.sngny('--snxr-inyvq vf vapbzcngvoyr jvgu --snxr-vainyvq') tvg.purpx_ercb_be_qvr() vaqrksvyr = bcg.vaqrksvyr be tvg.ercb('ohcvaqrk') unaqyr_pgey_p() vs bcg.purpx: ybt('purpx: fgnegvat vavgvny purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) cnguf = vaqrk.erqhpr_cnguf(rkgen) vs bcg.hcqngr: vs abg cnguf: b.sngny('hcqngr (-h) erdhrfgrq ohg ab cnguf tvira') sbe (ec,cngu) va cnguf: hcqngr_vaqrk(ec) vs bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq: sbe (anzr, rag) va vaqrk.Ernqre(vaqrksvyr).svygre(rkgen be ['']): vs (bcg.zbqvsvrq naq (rag.vf_inyvq() be rag.vf_qryrgrq() be abg rag.zbqr)): pbagvahr yvar = '' vs bcg.fgnghf: vs rag.vf_qryrgrq(): yvar += 'Q ' ryvs abg rag.vf_inyvq(): vs rag.fun == vaqrk.RZCGL_FUN: yvar += 'N ' ryfr: yvar += 'Z ' ryfr: yvar += ' ' vs bcg.unfu: yvar += rag.fun.rapbqr('urk') + ' ' vs bcg.ybat: yvar += "%7f %7f " % (bpg(rag.zbqr), bpg(rag.tvgzbqr)) cevag yvar + (anzr be './') vs bcg.purpx naq (bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq be bcg.hcqngr): ybt('purpx: fgnegvat svany purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg sebz ohc vzcbeg bcgvbaf, urycref bcgfcrp = """ ohc eonpxhc-freire -- Guvf pbzznaq vf abg vagraqrq gb or eha znahnyyl. """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc-freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') # trg gur fhopbzznaq'f neti. # Abeznyyl jr pbhyq whfg cnff guvf ba gur pbzznaq yvar, ohg fvapr jr'yy bsgra # or trggvat pnyyrq ba gur bgure raq bs na ffu cvcr, juvpu graqf gb znatyr # neti (ol fraqvat vg ivn gur furyy), guvf jnl vf zhpu fnsre. ohs = flf.fgqva.ernq(4) fm = fgehpg.hacnpx('!V', ohs)[0] nffreg(fm > 0) nffreg(fm < 1000000) ohs = flf.fgqva.ernq(fm) nffreg(yra(ohs) == fm) neti = ohs.fcyvg('\0') # fgqva/fgqbhg ner fhccbfrqyl pbaarpgrq gb 'ohc freire' gung gur pnyyre # fgnegrq sbe hf (bsgra ba gur bgure raq bs na ffu ghaary), fb jr qba'g jnag # gb zvfhfr gurz. Zbir gurz bhg bs gur jnl, gura ercynpr fgqbhg jvgu # n cbvagre gb fgqree va pnfr bhe fhopbzznaq jnagf gb qb fbzrguvat jvgu vg. # # Vg zvtug or avpr gb qb gur fnzr jvgu fgqva, ohg zl rkcrevzragf fubjrq gung # ffu frrzf gb znxr vgf puvyq'f fgqree n ernqnoyr-ohg-arire-ernqf-nalguvat # fbpxrg. Gurl ernyyl fubhyq unir hfrq fuhgqbja(FUHG_JE) ba gur bgure raq # bs vg, ohg cebonoyl qvqa'g. 
Naljnl, vg'f gbb zrffl, fb yrg'f whfg znxr fher # nalbar ernqvat sebz fgqva vf qvfnccbvagrq. # # (Lbh pna'g whfg yrnir fgqva/fgqbhg "abg bcra" ol pybfvat gur svyr # qrfpevcgbef. Gura gur arkg svyr gung bcraf vf nhgbzngvpnyyl nffvtarq 0 be 1, # naq crbcyr *gelvat* gb ernq/jevgr fgqva/fgqbhg trg fperjrq.) bf.qhc2(0, 3) bf.qhc2(1, 4) bf.qhc2(2, 1) sq = bf.bcra('/qri/ahyy', bf.B_EQBAYL) bf.qhc2(sq, 0) bf.pybfr(sq) bf.raiveba['OHC_FREIRE_ERIREFR'] = urycref.ubfganzr() bf.rkrpic(neti[0], neti) flf.rkvg(99) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo, fhocebprff, gvzr sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * cne2_bx = 0 ahyys = bcra('/qri/ahyy') qrs qroht(f): vs bcg.ireobfr: ybt(f) qrs eha(neti): # ng yrnfg va clguba 2.5, hfvat "fgqbhg=2" be "fgqbhg=flf.fgqree" orybj # qbrfa'g npghnyyl jbex, orpnhfr fhocebprff pybfrf sq #2 evtug orsber # rkrpvat sbe fbzr ernfba. Fb jr jbex nebhaq vg ol qhcyvpngvat gur sq # svefg. sq = bf.qhc(2) # pbcl fgqree gel: c = fhocebprff.Cbcra(neti, fgqbhg=sq, pybfr_sqf=Snyfr) erghea c.jnvg() svanyyl: bf.pybfr(sq) qrs cne2_frghc(): tybony cne2_bx ei = 1 gel: c = fhocebprff.Cbcra(['cne2', '--uryc'], fgqbhg=ahyys, fgqree=ahyys, fgqva=ahyys) ei = c.jnvg() rkprcg BFReebe: ybt('sfpx: jneavat: cne2 abg sbhaq; qvfnoyvat erpbirel srngherf.\a') ryfr: cne2_bx = 1 qrs cnei(yiy): vs bcg.ireobfr >= yiy: vs vfggl: erghea [] ryfr: erghea ['-d'] ryfr: erghea ['-dd'] qrs cne2_trarengr(onfr): erghea eha(['cne2', 'perngr', '-a1', '-p200'] + cnei(2) + ['--', onfr, onfr+'.cnpx', onfr+'.vqk']) qrs cne2_irevsl(onfr): erghea eha(['cne2', 'irevsl'] + cnei(3) + ['--', onfr]) qrs cne2_ercnve(onfr): erghea eha(['cne2', 'ercnve'] + cnei(2) + ['--', onfr]) qrs dhvpx_irevsl(onfr): s = bcra(onfr + '.cnpx', 'eo') s.frrx(-20, 2) jnagfhz = s.ernq(20) nffreg(yra(jnagfhz) == 20) s.frrx(0) fhz = Fun1() sbe o va puhaxlernqre(s, bf.sfgng(s.svyrab()).fg_fvmr - 20): fhz.hcqngr(o) vs fhz.qvtrfg() != jnagfhz: envfr InyhrReebe('rkcrpgrq %e, tbg %e' % (jnagfhz.rapbqr('urk'), fhz.urkqvtrfg())) qrs tvg_irevsl(onfr): vs bcg.dhvpx: gel: dhvpx_irevsl(onfr) rkprcg Rkprcgvba, r: qroht('reebe: %f\a' % r) erghea 1 erghea 0 ryfr: erghea eha(['tvg', 'irevsl-cnpx', '--', onfr]) qrs qb_cnpx(onfr, ynfg): pbqr = 0 vs cne2_bx naq cne2_rkvfgf naq (bcg.ercnve be abg bcg.trarengr): ierfhyg = cne2_irevsl(onfr) vs ierfhyg != 0: vs bcg.ercnve: eerfhyg = cne2_ercnve(onfr) vs eerfhyg != 0: cevag '%f cne2 ercnve: snvyrq (%q)' % (ynfg, eerfhyg) pbqr = eerfhyg ryfr: cevag '%f cne2 ercnve: fhpprrqrq (0)' % ynfg pbqr = 100 ryfr: cevag '%f cne2 irevsl: snvyrq (%q)' % (ynfg, ierfhyg) pbqr = ierfhyg ryfr: cevag '%f bx' % ynfg ryvs abg bcg.trarengr be (cne2_bx naq abg cne2_rkvfgf): terfhyg = tvg_irevsl(onfr) vs terfhyg != 0: cevag '%f tvg irevsl: snvyrq (%q)' % (ynfg, terfhyg) pbqr = terfhyg ryfr: vs cne2_bx naq bcg.trarengr: cerfhyg = cne2_trarengr(onfr) vs cerfhyg != 0: cevag '%f cne2 perngr: snvyrq (%q)' % (ynfg, cerfhyg) pbqr = cerfhyg ryfr: cevag '%f bx' % ynfg ryfr: cevag '%f bx' % ynfg ryfr: nffreg(bcg.trarengr naq (abg cne2_bx be cne2_rkvfgf)) qroht(' fxvccrq: cne2 svyr nyernql trarengrq.\a') erghea pbqr bcgfcrp = """ ohc sfpx [bcgvbaf...] [svyranzrf...] -- e,ercnve nggrzcg gb ercnve reebef hfvat cne2 (qnatrebhf!) 
t,trarengr trarengr nhgb-ercnve vasbezngvba hfvat cne2 i,ireobfr vapernfr ireobfvgl (pna or hfrq zber guna bapr) dhvpx whfg purpx cnpx fun1fhz, qba'g hfr tvg irevsl-cnpx w,wbof= eha 'a' wbof va cnenyyry cne2-bx vzzrqvngryl erghea 0 vs cne2 vf bx, 1 vs abg qvfnoyr-cne2 vtaber cne2 rira vs vg vf ninvynoyr """ b = bcgvbaf.Bcgvbaf('ohc sfpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) cne2_frghc() vs bcg.cne2_bx: vs cne2_bx: flf.rkvg(0) # 'gehr' va fu ryfr: flf.rkvg(1) vs bcg.qvfnoyr_cne2: cne2_bx = 0 tvg.purpx_ercb_be_qvr() vs abg rkgen: qroht('sfpx: Ab svyranzrf tvira: purpxvat nyy cnpxf.\a') rkgen = tybo.tybo(tvg.ercb('bowrpgf/cnpx/*.cnpx')) pbqr = 0 pbhag = 0 bhgfgnaqvat = {} sbe anzr va rkgen: vs anzr.raqfjvgu('.cnpx'): onfr = anzr[:-5] ryvs anzr.raqfjvgu('.vqk'): onfr = anzr[:-4] ryvs anzr.raqfjvgu('.cne2'): onfr = anzr[:-5] ryvs bf.cngu.rkvfgf(anzr + '.cnpx'): onfr = anzr ryfr: envfr Rkprcgvba('%f vf abg n cnpx svyr!' % anzr) (qve,ynfg) = bf.cngu.fcyvg(onfr) cne2_rkvfgf = bf.cngu.rkvfgf(onfr + '.cne2') vs cne2_rkvfgf naq bf.fgng(onfr + '.cne2').fg_fvmr == 0: cne2_rkvfgf = 0 flf.fgqbhg.syhfu() qroht('sfpx: purpxvat %f (%f)\a' % (ynfg, cne2_bx naq cne2_rkvfgf naq 'cne2' be 'tvg')) vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.wbof: ap = qb_cnpx(onfr, ynfg) pbqr = pbqr be ap pbhag += 1 ryfr: juvyr yra(bhgfgnaqvat) >= bcg.wbof: (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 cvq = bf.sbex() vs cvq: # cnerag bhgfgnaqvat[cvq] = 1 ryfr: # puvyq gel: flf.rkvg(qb_cnpx(onfr, ynfg)) rkprcg Rkprcgvba, r: ybt('rkprcgvba: %e\a' % r) flf.rkvg(99) juvyr yra(bhgfgnaqvat): (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.ireobfr naq vfggl: ybt('sfpx qbar. \a') flf.rkvg(pbqr) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg, trgbcg, fhocebprff, fvtany sebz ohc vzcbeg bcgvbaf, ffu sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc eonpxhc vaqrk ... ohc eonpxhc fnir ... ohc eonpxhc fcyvg ... """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc', bcgfcrp, bcgshap=trgbcg.trgbcg) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) < 2: b.sngny('nethzragf rkcrpgrq') pynff FvtRkprcgvba(Rkprcgvba): qrs __vavg__(frys, fvtahz): frys.fvtahz = fvtahz Rkprcgvba.__vavg__(frys, 'fvtany %q erprvirq' % fvtahz) qrs unaqyre(fvtahz, senzr): envfr FvtRkprcgvba(fvtahz) fvtany.fvtany(fvtany.FVTGREZ, unaqyre) fvtany.fvtany(fvtany.FVTVAG, unaqyre) fc = Abar c = Abar erg = 99 gel: ubfganzr = rkgen[0] neti = rkgen[1:] c = ffu.pbaarpg(ubfganzr, 'eonpxhc-freire') netif = '\0'.wbva(['ohc'] + neti) c.fgqva.jevgr(fgehpg.cnpx('!V', yra(netif)) + netif) c.fgqva.syhfu() znva_rkr = bf.raiveba.trg('OHC_ZNVA_RKR') be flf.neti[0] fc = fhocebprff.Cbcra([znva_rkr, 'freire'], fgqva=c.fgqbhg, fgqbhg=c.fgqva) c.fgqva.pybfr() c.fgqbhg.pybfr() svanyyl: juvyr 1: # vs jr trg n fvtany juvyr jnvgvat, jr unir gb xrrc jnvgvat, whfg # va pnfr bhe puvyq qbrfa'g qvr. 
gel: erg = c.jnvg() fc.jnvg() oernx rkprcg FvtRkprcgvba, r: ybt('\aohc eonpxhc: %f\a' % r) bf.xvyy(c.cvq, r.fvtahz) erg = 84 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc arjyvare """ b = bcgvbaf.Bcgvbaf('ohc arjyvare', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") e = er.pbzcvyr(e'([\e\a])') ynfgyra = 0 nyy = '' juvyr 1: y = e.fcyvg(nyy, 1) vs yra(y) <= 1: gel: o = bf.ernq(flf.fgqva.svyrab(), 4096) rkprcg XrlobneqVagreehcg: oernx vs abg o: oernx nyy += o ryfr: nffreg(yra(y) == 3) (yvar, fcyvgpune, nyy) = y #fcyvgpune = '\a' flf.fgqbhg.jevgr('%-*f%f' % (ynfgyra, yvar, fcyvgpune)) vs fcyvgpune == '\e': ynfgyra = yra(yvar) ryfr: ynfgyra = 0 flf.fgqbhg.syhfu() vs ynfgyra be nyy: flf.fgqbhg.jevgr('%-*f\a' % (ynfgyra, nyy)) #!/hfe/ova/rai clguba vzcbeg flf sebz ohc vzcbeg bcgvbaf, tvg, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc znetva """ b = bcgvbaf.Bcgvbaf('ohc znetva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") tvg.purpx_ercb_be_qvr() #tvg.vtaber_zvqk = 1 zv = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) ynfg = '\0'*20 ybatzngpu = 0 sbe v va zv: vs v == ynfg: pbagvahr #nffreg(fge(v) >= ynfg) cz = _unfufcyvg.ovgzngpu(ynfg, v) ybatzngpu = znk(ybatzngpu, cz) ynfg = v cevag ybatzngpu #!/hfe/ova/rai clguba sebz ohc vzcbeg bcgvbaf, qerphefr sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc qerphefr -- k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf d,dhvrg qba'g npghnyyl cevag svyranzrf cebsvyr eha haqre gur clguba cebsvyre """ b = bcgvbaf.Bcgvbaf('ohc qerphefr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar svyranzr rkcrpgrq") vg = qerphefr.erphefvir_qveyvfg(rkgen, bcg.kqri) vs bcg.cebsvyr: vzcbeg pCebsvyr qrs qb_vg(): sbe v va vg: cnff pCebsvyr.eha('qb_vg()') ryfr: vs bcg.dhvrg: sbe v va vg: cnff ryfr: sbe (anzr,fg) va vg: cevag anzr vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc fcyvg [-gpo] [-a anzr] [--orapu] [svyranzrf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu o,oybof bhgchg n frevrf bs oybo vqf g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) A,abbc qba'g npghnyyl fnir gur qngn naljurer d,dhvrg qba'g cevag cebterff zrffntrf i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) pbcl whfg pbcl vachg gb bhgchg, unfufcyvggvat nybat gur jnl orapu cevag orapuznex gvzvatf gb fgqree znk-cnpx-fvmr= znkvzhz olgrf va n fvatyr cnpx znk-cnpx-bowrpgf= znkvzhz ahzore bs bowrpgf va n fvatyr cnpx snabhg= znkvzhz ahzore bs oybof va n fvatyr gerr """ b = bcgvbaf.Bcgvbaf('ohc fcyvg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr be bcg.abbc be bcg.pbcl): b.sngny("hfr bar be zber bs -o, -g, -p, -a, -A, --pbcl") vs (bcg.abbc be bcg.pbcl) naq (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny('-A vf vapbzcngvoyr jvgu -o, -g, -p, -a') vs bcg.ireobfr >= 2: tvg.ireobfr = bcg.ireobfr - 1 bcg.orapu = 1 vs bcg.znk_cnpx_fvmr: unfufcyvg.znk_cnpx_fvmr = cnefr_ahz(bcg.znk_cnpx_fvmr) vs bcg.znk_cnpx_bowrpgf: unfufcyvg.znk_cnpx_bowrpgf = cnefr_ahz(bcg.znk_cnpx_bowrpgf) vs bcg.snabhg: unfufcyvg.snabhg = cnefr_ahz(bcg.snabhg) vs bcg.oybof: unfufcyvg.snabhg = 0 vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") fgneg_gvzr = gvzr.gvzr() ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.abbc be bcg.pbcl: pyv = j = byqers = Abar ryvs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() svyrf = rkgen naq (bcra(sa) sbe sa va rkgen) be [flf.fgqva] vs j: funyvfg = unfufcyvg.fcyvg_gb_funyvfg(j, svyrf) gerr = j.arj_gerr(funyvfg) ryfr: ynfg = 0 sbe (oybo, ovgf) va unfufcyvg.unfufcyvg_vgre(svyrf): unfufcyvg.gbgny_fcyvg += yra(oybo) vs bcg.pbcl: flf.fgqbhg.jevgr(fge(oybo)) zrtf = unfufcyvg.gbgny_fcyvg/1024/1024 vs abg bcg.dhvrg naq ynfg != zrtf: cebterff('%q Zolgrf ernq\e' % zrtf) ynfg = zrtf cebterff('%q Zolgrf ernq, qbar.\a' % zrtf) vs bcg.ireobfr: ybt('\a') vs bcg.oybof: sbe (zbqr,anzr,ova) va funyvfg: cevag ova.rapbqr('urk') vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fcyvg\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') vs j: j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() frpf = gvzr.gvzr() - fgneg_gvzr fvmr = unfufcyvg.gbgny_fcyvg vs bcg.orapu: ybt('\aohc: %.2sxolgrf va %.2s frpf = %.2s xolgrf/frp\a' % (fvmr/1024., frpf, fvmr/1024./frpf)) #!/hfe/ova/rai clguba vzcbeg flf, er, fgehpg, zznc sebz ohc vzcbeg tvg, bcgvbaf sebz ohc.urycref vzcbeg * qrs f_sebz_olgrf(olgrf): pyvfg = [pue(o) sbe o va olgrf] erghea ''.wbva(pyvfg) qrs ercbeg(pbhag): svryqf = ['IzFvmr', 'IzEFF', 'IzQngn', 'IzFgx'] q = {} sbe yvar va bcra('/cebp/frys/fgnghf').ernqyvarf(): y = er.fcyvg(e':\f*', yvar.fgevc(), 1) q[y[0]] = y[1] vs pbhag >= 0: r1 = pbhag svryqf = [q[x] sbe x va svryqf] ryfr: r1 = '' cevag ('%9f ' + ('%10f ' * yra(svryqf))) % ghcyr([r1] + svryqf) flf.fgqbhg.syhfu() bcgfcrp = """ ohc zrzgrfg [-a ryrzragf] [-p plpyrf] -- a,ahzore= ahzore bs bowrpgf cre plpyr p,plpyrf= 
ahzore bs plpyrf gb eha vtaber-zvqk vtaber .zvqk svyrf, hfr bayl .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zrzgrfg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') tvg.vtaber_zvqk = bcg.vtaber_zvqk tvg.purpx_ercb_be_qvr() z = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) plpyrf = bcg.plpyrf be 100 ahzore = bcg.ahzore be 10000 ercbeg(-1) s = bcra('/qri/henaqbz') n = zznc.zznc(-1, 20) ercbeg(0) sbe p va kenatr(plpyrf): sbe a va kenatr(ahzore): o = s.ernq(3) vs 0: olgrf = yvfg(fgehpg.hacnpx('!OOO', o)) + [0]*17 olgrf[2] &= 0ks0 ova = fgehpg.cnpx('!20f', f_sebz_olgrf(olgrf)) ryfr: n[0:2] = o[0:2] n[2] = pue(beq(o[2]) & 0ks0) ova = fge(n[0:20]) #cevag ova.rapbqr('urk') z.rkvfgf(ova) ercbeg((p+1)*ahzore) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * qrs cevag_abqr(grkg, a): cersvk = '' vs bcg.unfu: cersvk += "%f " % a.unfu.rapbqr('urk') vs fgng.F_VFQVE(a.zbqr): cevag '%f%f/' % (cersvk, grkg) ryvs fgng.F_VFYAX(a.zbqr): cevag '%f%f@' % (cersvk, grkg) ryfr: cevag '%f%f' % (cersvk, grkg) bcgfcrp = """ ohc yf -- f,unfu fubj unfu sbe rnpu svyr """ b = bcgvbaf.Bcgvbaf('ohc yf', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) vs abg rkgen: rkgen = ['/'] erg = 0 sbe q va rkgen: gel: a = gbc.yerfbyir(q) vs fgng.F_VFQVE(a.zbqr): sbe fho va a: cevag_abqr(fho.anzr, fho) ryfr: cevag_abqr(q, a) rkprcg isf.AbqrReebe, r: ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er, fgng, ernqyvar, sazngpu sebz ohc vzcbeg bcgvbaf, tvg, fudhbgr, isf sebz ohc.urycref vzcbeg * qrs abqr_anzr(grkg, a): vs fgng.F_VFQVE(a.zbqr): erghea '%f/' % grkg ryvs fgng.F_VFYAX(a.zbqr): erghea '%f@' % grkg ryfr: erghea '%f' % grkg qrs qb_yf(cngu, a): y = [] vs fgng.F_VFQVE(a.zbqr): sbe fho va a: y.nccraq(abqr_anzr(fho.anzr, fho)) ryfr: y.nccraq(abqr_anzr(cngu, a)) cevag pbyhzangr(y, '') qrs jevgr_gb_svyr(vas, bhgs): sbe oybo va puhaxlernqre(vas): bhgs.jevgr(oybo) qrs vachgvgre(): vs bf.vfnggl(flf.fgqva.svyrab()): juvyr 1: gel: lvryq enj_vachg('ohc> ') rkprcg RBSReebe: oernx ryfr: sbe yvar va flf.fgqva: lvryq yvar qrs _pbzcyrgre_trg_fhof(yvar): (dglcr, ynfgjbeq) = fudhbgr.hasvavfurq_jbeq(yvar) (qve,anzr) = bf.cngu.fcyvg(ynfgjbeq) #ybt('\apbzcyrgre: %e %e %e\a' % (dglcr, ynfgjbeq, grkg)) a = cjq.erfbyir(qve) fhof = yvfg(svygre(ynzoqn k: k.anzr.fgnegfjvgu(anzr), a.fhof())) erghea (qve, anzr, dglcr, ynfgjbeq, fhof) _ynfg_yvar = Abar _ynfg_erf = Abar qrs pbzcyrgre(grkg, fgngr): tybony _ynfg_yvar tybony _ynfg_erf gel: yvar = ernqyvar.trg_yvar_ohssre()[:ernqyvar.trg_raqvqk()] vs _ynfg_yvar != yvar: _ynfg_erf = _pbzcyrgre_trg_fhof(yvar) _ynfg_yvar = yvar (qve, anzr, dglcr, ynfgjbeq, fhof) = _ynfg_erf vs fgngr < yra(fhof): fa = fhof[fgngr] fa1 = fa.erfbyir('') # qrers flzyvaxf shyyanzr = bf.cngu.wbva(qve, fa.anzr) vs fgng.F_VFQVE(fa1.zbqr): erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr+'/', grezvangr=Snyfr) ryfr: erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr, grezvangr=Gehr) + ' ' erghea grkg + erg rkprcg Rkprcgvba, r: ybt('\areebe va pbzcyrgvba: %f\a' % r) bcgfcrp = """ ohc sgc """ b = bcgvbaf.Bcgvbaf('ohc sgc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) cjq = gbc vs rkgen: yvarf = rkgen ryfr: ernqyvar.frg_pbzcyrgre_qryvzf(' \g\a\e/') ernqyvar.frg_pbzcyrgre(pbzcyrgre) ernqyvar.cnefr_naq_ovaq("gno: pbzcyrgr") yvarf = vachgvgre() sbe yvar va yvarf: vs abg yvar.fgevc(): pbagvahr jbeqf = [jbeq sbe 
(jbeqfgneg,jbeq) va fudhbgr.dhbgrfcyvg(yvar)] pzq = jbeqf[0].ybjre() #ybt('rkrphgr: %e %e\a' % (pzq, cnez)) gel: vs pzq == 'yf': sbe cnez va (jbeqf[1:] be ['.']): qb_yf(cnez, cjq.erfbyir(cnez)) ryvs pzq == 'pq': sbe cnez va jbeqf[1:]: cjq = cjq.erfbyir(cnez) ryvs pzq == 'cjq': cevag cjq.shyyanzr() ryvs pzq == 'png': sbe cnez va jbeqf[1:]: jevgr_gb_svyr(cjq.erfbyir(cnez).bcra(), flf.fgqbhg) ryvs pzq == 'trg': vs yra(jbeqf) abg va [2,3]: envfr Rkprcgvba('Hfntr: trg [ybpnyanzr]') eanzr = jbeqf[1] (qve,onfr) = bf.cngu.fcyvg(eanzr) yanzr = yra(jbeqf)>2 naq jbeqf[2] be onfr vas = cjq.erfbyir(eanzr).bcra() ybt('Fnivat %e\a' % yanzr) jevgr_gb_svyr(vas, bcra(yanzr, 'jo')) ryvs pzq == 'ztrg': sbe cnez va jbeqf[1:]: (qve,onfr) = bf.cngu.fcyvg(cnez) sbe a va cjq.erfbyir(qve).fhof(): vs sazngpu.sazngpu(a.anzr, onfr): gel: ybt('Fnivat %e\a' % a.anzr) vas = a.bcra() bhgs = bcra(a.anzr, 'jo') jevgr_gb_svyr(vas, bhgs) bhgs.pybfr() rkprcg Rkprcgvba, r: ybt(' reebe: %f\a' % r) ryvs pzq == 'uryc' be pzq == '?': ybt('Pbzznaqf: yf pq cjq png trg ztrg uryc dhvg\a') ryvs pzq == 'dhvg' be pzq == 'rkvg' be pzq == 'olr': oernx ryfr: envfr Rkprcgvba('ab fhpu pbzznaq %e' % pzq) rkprcg Rkprcgvba, r: ybt('reebe: %f\a' % r) #envfr #!/hfe/ova/rai clguba vzcbeg flf, zznc sebz ohc vzcbeg bcgvbaf, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc enaqbz [-F frrq] -- F,frrq= bcgvbany enaqbz ahzore frrq (qrsnhyg 1) s,sbepr cevag enaqbz qngn gb fgqbhg rira vs vg'f n ggl """ b = bcgvbaf.Bcgvbaf('ohc enaqbz', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") gbgny = cnefr_ahz(rkgen[0]) vs bcg.sbepr be (abg bf.vfnggl(1) naq abg ngbv(bf.raiveba.trg('OHC_SBEPR_GGL')) & 1): _unfufcyvg.jevgr_enaqbz(flf.fgqbhg.svyrab(), gbgny, bcg.frrq be 0) ryfr: ybt('reebe: abg jevgvat ovanel qngn gb n grezvany. 
Hfr -s gb sbepr.\a') flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc uryc """ b = bcgvbaf.Bcgvbaf('ohc uryc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) == 0: # gur jenccre cebtenz cebivqrf gur qrsnhyg hfntr fgevat bf.rkrpic(bf.raiveba['OHC_ZNVA_RKR'], ['ohc']) ryvs yra(rkgen) == 1: qbpanzr = (rkgen[0]=='ohc' naq 'ohc' be ('ohc-%f' % rkgen[0])) rkr = flf.neti[0] (rkrcngu, rkrsvyr) = bf.cngu.fcyvg(rkr) znacngu = bf.cngu.wbva(rkrcngu, '../Qbphzragngvba/' + qbpanzr + '.[1-9]') t = tybo.tybo(znacngu) vs t: bf.rkrpic('zna', ['zna', '-y', t[0]]) ryfr: bf.rkrpic('zna', ['zna', qbpanzr]) ryfr: b.sngny("rknpgyl bar pbzznaq anzr rkcrpgrq") #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng, reeab, shfr, er, gvzr, grzcsvyr sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * pynff Fgng(shfr.Fgng): qrs __vavg__(frys): frys.fg_zbqr = 0 frys.fg_vab = 0 frys.fg_qri = 0 frys.fg_ayvax = 0 frys.fg_hvq = 0 frys.fg_tvq = 0 frys.fg_fvmr = 0 frys.fg_ngvzr = 0 frys.fg_zgvzr = 0 frys.fg_pgvzr = 0 frys.fg_oybpxf = 0 frys.fg_oyxfvmr = 0 frys.fg_eqri = 0 pnpur = {} qrs pnpur_trg(gbc, cngu): cnegf = cngu.fcyvg('/') pnpur[('',)] = gbc p = Abar znk = yra(cnegf) #ybt('pnpur: %e\a' % pnpur.xrlf()) sbe v va enatr(znk): cer = cnegf[:znk-v] #ybt('pnpur gelvat: %e\a' % cer) p = pnpur.trg(ghcyr(cer)) vs p: erfg = cnegf[znk-v:] sbe e va erfg: #ybt('erfbyivat %e sebz %e\a' % (e, p.shyyanzr())) p = p.yerfbyir(e) xrl = ghcyr(cer + [e]) #ybt('fnivat: %e\a' % (xrl,)) pnpur[xrl] = p oernx nffreg(p) erghea p pynff OhcSf(shfr.Shfr): qrs __vavg__(frys, gbc): shfr.Shfr.__vavg__(frys) frys.gbc = gbc qrs trgngge(frys, cngu): ybt('--trgngge(%e)\a' % cngu) gel: abqr = pnpur_trg(frys.gbc, cngu) fg = Fgng() fg.fg_zbqr = abqr.zbqr fg.fg_ayvax = abqr.ayvaxf() fg.fg_fvmr = abqr.fvmr() fg.fg_zgvzr = abqr.zgvzr fg.fg_pgvzr = abqr.pgvzr fg.fg_ngvzr = abqr.ngvzr erghea fg rkprcg isf.AbFhpuSvyr: erghea -reeab.RABRAG qrs ernqqve(frys, cngu, bssfrg): ybt('--ernqqve(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) lvryq shfr.Qveragel('.') lvryq shfr.Qveragel('..') sbe fho va abqr.fhof(): lvryq shfr.Qveragel(fho.anzr) qrs ernqyvax(frys, cngu): ybt('--ernqyvax(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) erghea abqr.ernqyvax() qrs bcra(frys, cngu, syntf): ybt('--bcra(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) nppzbqr = bf.B_EQBAYL | bf.B_JEBAYL | bf.B_EQJE vs (syntf & nppzbqr) != bf.B_EQBAYL: erghea -reeab.RNPPRF abqr.bcra() qrs eryrnfr(frys, cngu, syntf): ybt('--eryrnfr(%e)\a' % cngu) qrs ernq(frys, cngu, fvmr, bssfrg): ybt('--ernq(%e)\a' % cngu) a = pnpur_trg(frys.gbc, cngu) b = a.bcra() b.frrx(bssfrg) erghea b.ernq(fvmr) vs abg unfngge(shfr, '__irefvba__'): envfr EhagvzrReebe, "lbhe shfr zbqhyr vf gbb byq sbe shfr.__irefvba__" shfr.shfr_clguba_ncv = (0, 2) bcgfcrp = """ ohc shfr [-q] [-s] -- q,qroht vapernfr qroht yriry s,sbertebhaq eha va sbertebhaq """ b = bcgvbaf.Bcgvbaf('ohc shfr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) s = OhcSf(gbc) s.shfr_netf.zbhagcbvag = rkgen[0] vs bcg.qroht: s.shfr_netf.nqq('qroht') vs bcg.sbertebhaq: s.shfr_netf.frgzbq('sbertebhaq') cevag s.zhygvguernqrq s.zhygvguernqrq = Snyfr s.znva() #!/hfe/ova/rai clguba sebz ohc vzcbeg tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ [OHC_QVE=...] 
ohc vavg [-e ubfg:cngu] -- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc vavg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") vs bcg.erzbgr: tvg.vavg_ercb() # ybpny ercb tvg.purpx_ercb_be_qvr() pyv = pyvrag.Pyvrag(bcg.erzbgr, perngr=Gehr) pyv.pybfr() ryfr: tvg.vavg_ercb() #!/hfe/ova/rai clguba vzcbeg flf, zngu, fgehpg, tybo sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * CNTR_FVMR=4096 FUN_CRE_CNTR=CNTR_FVMR/200. qrs zretr(vqkyvfg, ovgf, gnoyr): pbhag = 0 sbe r va tvg.vqkzretr(vqkyvfg): pbhag += 1 cersvk = tvg.rkgenpg_ovgf(r, ovgf) gnoyr[cersvk] = pbhag lvryq r qrs qb_zvqk(bhgqve, bhgsvyranzr, vasvyranzrf): vs abg bhgsvyranzr: nffreg(bhgqve) fhz = Fun1('\0'.wbva(vasvyranzrf)).urkqvtrfg() bhgsvyranzr = '%f/zvqk-%f.zvqk' % (bhgqve, fhz) vac = [] gbgny = 0 sbe anzr va vasvyranzrf: vk = tvg.CnpxVqk(anzr) vac.nccraq(vk) gbgny += yra(vk) ybt('Zretvat %q vaqrkrf (%q bowrpgf).\a' % (yra(vasvyranzrf), gbgny)) vs (abg bcg.sbepr naq (gbgny < 1024 naq yra(vasvyranzrf) < 3)) \ be (bcg.sbepr naq abg gbgny): ybt('zvqk: abguvat gb qb.\a') erghea cntrf = vag(gbgny/FUN_CRE_CNTR) be 1 ovgf = vag(zngu.prvy(zngu.ybt(cntrf, 2))) ragevrf = 2**ovgf ybt('Gnoyr fvmr: %q (%q ovgf)\a' % (ragevrf*4, ovgf)) gnoyr = [0]*ragevrf gel: bf.hayvax(bhgsvyranzr) rkprcg BFReebe: cnff s = bcra(bhgsvyranzr + '.gzc', 'j+') s.jevgr('ZVQK\0\0\0\2') s.jevgr(fgehpg.cnpx('!V', ovgf)) nffreg(s.gryy() == 12) s.jevgr('\0'*4*ragevrf) sbe r va zretr(vac, ovgf, gnoyr): s.jevgr(r) s.jevgr('\0'.wbva(bf.cngu.onfranzr(c) sbe c va vasvyranzrf)) s.frrx(12) s.jevgr(fgehpg.cnpx('!%qV' % ragevrf, *gnoyr)) s.pybfr() bf.eranzr(bhgsvyranzr + '.gzc', bhgsvyranzr) # guvf vf whfg sbe grfgvat vs 0: c = tvg.CnpxZvqk(bhgsvyranzr) nffreg(yra(c.vqkanzrf) == yra(vasvyranzrf)) cevag c.vqkanzrf nffreg(yra(c) == gbgny) cv = vgre(c) sbe v va zretr(vac, gbgny, ovgf, gnoyr): nffreg(v == cv.arkg()) nffreg(c.rkvfgf(v)) cevag bhgsvyranzr bcgfcrp = """ ohc zvqk [bcgvbaf...] 
-- b,bhgchg= bhgchg zvqk svyranzr (qrsnhyg: nhgb-trarengrq) n,nhgb nhgbzngvpnyyl perngr .zvqk sebz nal havaqrkrq .vqk svyrf s,sbepr nhgbzngvpnyyl perngr .zvqk sebz *nyy* .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zvqk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen naq (bcg.nhgb be bcg.sbepr): b.sngny("lbh pna'g hfr -s/-n naq nyfb cebivqr svyranzrf") tvg.purpx_ercb_be_qvr() vs rkgen: qb_zvqk(tvg.ercb('bowrpgf/cnpx'), bcg.bhgchg, rkgen) ryvs bcg.nhgb be bcg.sbepr: cnguf = [tvg.ercb('bowrpgf/cnpx')] cnguf += tybo.tybo(tvg.ercb('vaqrk-pnpur/*/.')) sbe cngu va cnguf: ybt('zvqk: fpnaavat %f\a' % cngu) vs bcg.sbepr: qb_zvqk(cngu, bcg.bhgchg, tybo.tybo('%f/*.vqk' % cngu)) ryvs bcg.nhgb: z = tvg.CnpxVqkYvfg(cngu) arrqrq = {} sbe cnpx va z.cnpxf: # bayl .vqk svyrf jvgubhg n .zvqk ner bcra vs cnpx.anzr.raqfjvgu('.vqk'): arrqrq[cnpx.anzr] = 1 qry z qb_zvqk(cngu, bcg.bhgchg, arrqrq.xrlf()) ybt('\a') ryfr: b.sngny("lbh zhfg hfr -s be -n be cebivqr vachg svyranzrf") #!/hfe/ova/rai clguba vzcbeg flf, bf, enaqbz sebz ohc vzcbeg bcgvbaf sebz ohc.urycref vzcbeg * qrs enaqoybpx(a): y = [] sbe v va kenatr(a): y.nccraq(pue(enaqbz.enaqenatr(0,256))) erghea ''.wbva(y) bcgfcrp = """ ohc qnzntr [-a pbhag] [-f znkfvmr] [-F frrq] -- JNEAVAT: GUVF PBZZNAQ VF RKGERZRYL QNATREBHF a,ahz= ahzore bs oybpxf gb qnzntr f,fvmr= znkvzhz fvmr bs rnpu qnzntrq oybpx creprag= znkvzhz fvmr bs rnpu qnzntrq oybpx (nf n creprag bs ragver svyr) rdhny fcernq qnzntr rirayl guebhtubhg gur svyr F,frrq= enaqbz ahzore frrq (sbe ercrngnoyr grfgf) """ b = bcgvbaf.Bcgvbaf('ohc qnzntr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg rkgen: b.sngny('svyranzrf rkcrpgrq') vs bcg.frrq != Abar: enaqbz.frrq(bcg.frrq) sbe anzr va rkgen: ybt('Qnzntvat "%f"...\a' % anzr) s = bcra(anzr, 'e+o') fg = bf.sfgng(s.svyrab()) fvmr = fg.fg_fvmr vs bcg.creprag be bcg.fvmr: zf1 = vag(sybng(bcg.creprag be 0)/100.0*fvmr) be fvmr zf2 = bcg.fvmr be fvmr znkfvmr = zva(zf1, zf2) ryfr: znkfvmr = 1 puhaxf = bcg.ahz be 10 puhaxfvmr = fvmr/puhaxf sbe e va enatr(puhaxf): fm = enaqbz.enaqenatr(1, znkfvmr+1) vs fm > fvmr: fm = fvmr vs bcg.rdhny: bsf = e*puhaxfvmr ryfr: bsf = enaqbz.enaqenatr(0, fvmr - fm + 1) ybt(' %6q olgrf ng %q\a' % (fm, bsf)) s.frrx(bsf) s.jevgr(enaqoybpx(fm)) s.pybfr() #!/hfe/ova/rai clguba vzcbeg flf, fgehpg, zznc sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * fhfcraqrq_j = Abar qrs vavg_qve(pbaa, net): tvg.vavg_ercb(net) ybt('ohc freire: ohcqve vavgvnyvmrq: %e\a' % tvg.ercbqve) pbaa.bx() qrs frg_qve(pbaa, net): tvg.purpx_ercb_be_qvr(net) ybt('ohc freire: ohcqve vf %e\a' % tvg.ercbqve) pbaa.bx() qrs yvfg_vaqrkrf(pbaa, whax): tvg.purpx_ercb_be_qvr() sbe s va bf.yvfgqve(tvg.ercb('bowrpgf/cnpx')): vs s.raqfjvgu('.vqk'): pbaa.jevgr('%f\a' % s) pbaa.bx() qrs fraq_vaqrk(pbaa, anzr): tvg.purpx_ercb_be_qvr() nffreg(anzr.svaq('/') < 0) nffreg(anzr.raqfjvgu('.vqk')) vqk = tvg.CnpxVqk(tvg.ercb('bowrpgf/cnpx/%f' % anzr)) pbaa.jevgr(fgehpg.cnpx('!V', yra(vqk.znc))) pbaa.jevgr(vqk.znc) pbaa.bx() qrs erprvir_bowrpgf(pbaa, whax): tybony fhfcraqrq_j tvg.purpx_ercb_be_qvr() fhttrfgrq = {} vs fhfcraqrq_j: j = fhfcraqrq_j fhfcraqrq_j = Abar ryfr: j = tvg.CnpxJevgre() juvyr 1: af = pbaa.ernq(4) vs abg af: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq yratgu urnqre, tbg RBS\a') a = fgehpg.hacnpx('!V', af)[0] #ybt('rkcrpgvat %q olgrf\a' % a) vs abg a: ybt('ohc freire: erprvirq %q bowrpg%f.\a' % (j.pbhag, j.pbhag!=1 naq "f" be '')) shyycngu = j.pybfr() vs shyycngu: (qve, anzr) = bf.cngu.fcyvg(shyycngu) pbaa.jevgr('%f.vqk\a' % anzr) pbaa.bx() 
erghea ryvs a == 0kssssssss: ybt('ohc freire: erprvir-bowrpgf fhfcraqrq.\a') fhfcraqrq_j = j pbaa.bx() erghea ohs = pbaa.ernq(a) # bowrpg fvmrf va ohc ner ernfbanoyl fznyy #ybt('ernq %q olgrf\a' % a) vs yra(ohs) < a: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq %q olgrf, tbg %q\a' % (a, yra(ohs))) (glcr, pbagrag) = tvg._qrpbqr_cnpxbow(ohs) fun = tvg.pnyp_unfu(glcr, pbagrag) byqcnpx = j.rkvfgf(fun) # SVKZR: jr bayl fhttrfg n fvatyr vaqrk cre plpyr, orpnhfr gur pyvrag # vf pheeragyl qhzo gb qbjaybnq zber guna bar cre plpyr naljnl. # Npghnyyl jr fubhyq svk gur pyvrag, ohg guvf vf n zvabe bcgvzvmngvba # ba gur freire fvqr. vs abg fhttrfgrq naq \ byqcnpx naq (byqcnpx == Gehr be byqcnpx.raqfjvgu('.zvqk')): # SVKZR: jr fubhyqa'g ernyyl unir gb xabj nobhg zvqk svyrf # ng guvf ynlre. Ohg rkvfgf() ba n zvqk qbrfa'g erghea gur # cnpxanzr (fvapr vg qbrfa'g xabj)... cebonoyl jr fubhyq whfg # svk gung qrsvpvrapl bs zvqk svyrf riraghnyyl, nygubhtu vg'yy # znxr gur svyrf ovttre. Guvf zrgubq vf pregnvayl abg irel # rssvpvrag. j.bowpnpur.erserfu(fxvc_zvqk = Gehr) byqcnpx = j.bowpnpur.rkvfgf(fun) ybt('arj fhttrfgvba: %e\a' % byqcnpx) nffreg(byqcnpx) nffreg(byqcnpx != Gehr) nffreg(abg byqcnpx.raqfjvgu('.zvqk')) j.bowpnpur.erserfu(fxvc_zvqk = Snyfr) vs abg fhttrfgrq naq byqcnpx: nffreg(byqcnpx.raqfjvgu('.vqk')) (qve,anzr) = bf.cngu.fcyvg(byqcnpx) vs abg (anzr va fhttrfgrq): ybt("ohc freire: fhttrfgvat vaqrk %f\a" % anzr) pbaa.jevgr('vaqrk %f\a' % anzr) fhttrfgrq[anzr] = 1 ryfr: j._enj_jevgr([ohs]) # ABGERNPURQ qrs ernq_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() e = tvg.ernq_ers(ersanzr) pbaa.jevgr('%f\a' % (e be '').rapbqr('urk')) pbaa.bx() qrs hcqngr_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() arjiny = pbaa.ernqyvar().fgevc() byqiny = pbaa.ernqyvar().fgevc() tvg.hcqngr_ers(ersanzr, arjiny.qrpbqr('urk'), byqiny.qrpbqr('urk')) pbaa.bx() qrs png(pbaa, vq): tvg.purpx_ercb_be_qvr() gel: sbe oybo va tvg.png(vq): pbaa.jevgr(fgehpg.cnpx('!V', yra(oybo))) pbaa.jevgr(oybo) rkprcg XrlReebe, r: ybt('freire: reebe: %f\a' % r) pbaa.jevgr('\0\0\0\0') pbaa.reebe(r) ryfr: pbaa.jevgr('\0\0\0\0') pbaa.bx() bcgfcrp = """ ohc freire """ b = bcgvbaf.Bcgvbaf('ohc freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') ybt('ohc freire: ernqvat sebz fgqva.\a') pbzznaqf = { 'vavg-qve': vavg_qve, 'frg-qve': frg_qve, 'yvfg-vaqrkrf': yvfg_vaqrkrf, 'fraq-vaqrk': fraq_vaqrk, 'erprvir-bowrpgf': erprvir_bowrpgf, 'ernq-ers': ernq_ers, 'hcqngr-ers': hcqngr_ers, 'png': png, } # SVKZR: guvf cebgbpby vf gbgnyyl ynzr naq abg ng nyy shgher-cebbs. # (Rfcrpvnyyl fvapr jr nobeg pbzcyrgryl nf fbba nf *nalguvat* onq unccraf) pbaa = Pbaa(flf.fgqva, flf.fgqbhg) ye = yvarernqre(pbaa) sbe _yvar va ye: yvar = _yvar.fgevc() vs abg yvar: pbagvahr ybt('ohc freire: pbzznaq: %e\a' % yvar) jbeqf = yvar.fcyvg(' ', 1) pzq = jbeqf[0] erfg = yra(jbeqf)>1 naq jbeqf[1] be '' vs pzq == 'dhvg': oernx ryfr: pzq = pbzznaqf.trg(pzq) vs pzq: pzq(pbaa, erfg) ryfr: envfr Rkprcgvba('haxabja freire pbzznaq: %e\a' % yvar) ybt('ohc freire: qbar\a') #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc wbva [-e ubfg:cngu] [ersf be unfurf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc wbva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg rkgen: rkgen = yvarernqre(flf.fgqva) erg = 0 vs bcg.erzbgr: pyv = pyvrag.Pyvrag(bcg.erzbgr) png = pyv.png ryfr: pc = tvg.PngCvcr() png = pc.wbva sbe vq va rkgen: gel: sbe oybo va png(vq): flf.fgqbhg.jevgr(oybo) rkprcg XrlReebe, r: flf.fgqbhg.syhfu() ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, er, reeab, fgng, gvzr, zngu sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, vaqrk, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc fnir [-gp] [-a anzr] -- e,erzbgr= erzbgr ercbfvgbel cngu g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) d,dhvrg qba'g fubj cebterff zrgre fznyyre= bayl onpx hc svyrf fznyyre guna a olgrf """ b = bcgvbaf.Bcgvbaf('ohc fnir', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny("hfr bar be zber bs -g, -p, -a") vs abg rkgen: b.sngny("ab svyranzrf tvira") bcg.cebterff = (vfggl naq abg bcg.dhvrg) bcg.fznyyre = cnefr_ahz(bcg.fznyyre be 0) vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() unaqyr_pgey_p() qrs rngfynfu(qve): vs qve.raqfjvgu('/'): erghea qve[:-1] ryfr: erghea qve cnegf = [''] funyvfgf = [[]] qrs _chfu(cneg): nffreg(cneg) cnegf.nccraq(cneg) funyvfgf.nccraq([]) qrs _cbc(sbepr_gerr): nffreg(yra(cnegf) >= 1) cneg = cnegf.cbc() funyvfg = funyvfgf.cbc() gerr = sbepr_gerr be j.arj_gerr(funyvfg) vs funyvfgf: funyvfgf[-1].nccraq(('40000', cneg, gerr)) ryfr: # guvf jnf gur gbcyriry, fb chg vg onpx sbe fnavgl funyvfgf.nccraq(funyvfg) erghea gerr ynfgerznva = Abar qrs cebterff_ercbeg(a): tybony pbhag, fhopbhag, ynfgerznva fhopbhag += a pp = pbhag + fhopbhag cpg = gbgny naq (pp*100.0/gbgny) be 0 abj = gvzr.gvzr() ryncfrq = abj - gfgneg xcf = ryncfrq naq vag(pp/1024./ryncfrq) xcf_senp = 10 ** vag(zngu.ybt(xcf+1, 10) - 1) xcf = vag(xcf/xcf_senp)*xcf_senp vs pp: erznva = ryncfrq*1.0/pp * (gbgny-pp) ryfr: erznva = 0.0 vs (ynfgerznva naq (erznva > ynfgerznva) naq ((erznva - ynfgerznva)/ynfgerznva < 0.05)): erznva = ynfgerznva ryfr: ynfgerznva = erznva ubhef = vag(erznva/60/60) zvaf = vag(erznva/60 - ubhef*60) frpf = vag(erznva - ubhef*60*60 - zvaf*60) vs ryncfrq < 30: erznvafge = '' xcffge = '' ryfr: xcffge = '%qx/f' % xcf vs ubhef: erznvafge = '%qu%qz' % (ubhef, zvaf) ryvs zvaf: erznvafge = '%qz%q' % (zvaf, frpf) ryfr: erznvafge = '%qf' % frpf cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf) %f %f\e' % (cpg, pp/1024, gbgny/1024, spbhag, sgbgny, erznvafge, xcffge)) e = vaqrk.Ernqre(tvg.ercb('ohcvaqrk')) qrs nyernql_fnirq(rag): erghea rag.vf_inyvq() naq j.rkvfgf(rag.fun) naq rag.fun qrs jnagerphefr_cer(rag): erghea abg nyernql_fnirq(rag) qrs jnagerphefr_qhevat(rag): erghea abg nyernql_fnirq(rag) be rag.fun_zvffvat() gbgny = sgbgny = 0 vs bcg.cebterff: sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_cer): vs abg (sgbgny % 10024): cebterff('Ernqvat vaqrk: %q\e' % sgbgny) rkvfgf = rag.rkvfgf() unfuinyvq = nyernql_fnirq(rag) 
rag.frg_fun_zvffvat(abg unfuinyvq) vs abg bcg.fznyyre be rag.fvmr < bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: gbgny += rag.fvmr sgbgny += 1 cebterff('Ernqvat vaqrk: %q, qbar.\a' % sgbgny) unfufcyvg.cebterff_pnyyonpx = cebterff_ercbeg gfgneg = gvzr.gvzr() pbhag = fhopbhag = spbhag = 0 ynfgfxvc_anzr = Abar ynfgqve = '' sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_qhevat): (qve, svyr) = bf.cngu.fcyvg(rag.anzr) rkvfgf = (rag.syntf & vaqrk.VK_RKVFGF) unfuinyvq = nyernql_fnirq(rag) jnfzvffvat = rag.fun_zvffvat() byqfvmr = rag.fvmr vs bcg.ireobfr: vs abg rkvfgf: fgnghf = 'Q' ryvs abg unfuinyvq: vs rag.fun == vaqrk.RZCGL_FUN: fgnghf = 'N' ryfr: fgnghf = 'Z' ryfr: fgnghf = ' ' vs bcg.ireobfr >= 2: ybt('%f %-70f\a' % (fgnghf, rag.anzr)) ryvs abg fgng.F_VFQVE(rag.zbqr) naq ynfgqve != qve: vs abg ynfgqve.fgnegfjvgu(qve): ybt('%f %-70f\a' % (fgnghf, bf.cngu.wbva(qve, ''))) ynfgqve = qve vs bcg.cebterff: cebterff_ercbeg(0) spbhag += 1 vs abg rkvfgf: pbagvahr vs bcg.fznyyre naq rag.fvmr >= bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: nqq_reebe('fxvccvat ynetr svyr "%f"' % rag.anzr) ynfgfxvc_anzr = rag.anzr pbagvahr nffreg(qve.fgnegfjvgu('/')) qvec = qve.fcyvg('/') juvyr cnegf > qvec: _cbc(sbepr_gerr = Abar) vs qve != '/': sbe cneg va qvec[yra(cnegf):]: _chfu(cneg) vs abg svyr: # ab svyranzr cbegvba zrnaf guvf vf n fhoqve. Ohg # fho/cneragqverpgbevrf nyernql unaqyrq va gur cbc/chfu() cneg nobir. byqgerr = nyernql_fnirq(rag) # znl or Abar arjgerr = _cbc(sbepr_gerr = byqgerr) vs abg byqgerr: vs ynfgfxvc_anzr naq ynfgfxvc_anzr.fgnegfjvgu(rag.anzr): rag.vainyvqngr() ryfr: rag.inyvqngr(040000, arjgerr) rag.ercnpx() vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr pbagvahr # vg'f abg n qverpgbel vq = Abar vs unfuinyvq: zbqr = '%b' % rag.tvgzbqr vq = rag.fun funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) ryfr: vs fgng.F_VFERT(rag.zbqr): gel: s = unfufcyvg.bcra_abngvzr(rag.anzr) rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = unfufcyvg.fcyvg_gb_oybo_be_gerr(j, [s]) ryfr: vs fgng.F_VFQVE(rag.zbqr): nffreg(0) # unaqyrq nobir ryvs fgng.F_VFYAX(rag.zbqr): gel: ey = bf.ernqyvax(rag.anzr) rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = ('120000', j.arj_oybo(ey)) ryfr: nqq_reebe(Rkprcgvba('fxvccvat fcrpvny svyr "%f"' % rag.anzr)) ynfgfxvc_anzr = rag.anzr vs vq: rag.inyvqngr(vag(zbqr, 8), vq) rag.ercnpx() funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr fhopbhag = 0 vs bcg.cebterff: cpg = gbgny naq pbhag*100.0/gbgny be 100 cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf), qbar. 
\a' % (cpg, pbhag/1024, gbgny/1024, spbhag, sgbgny)) juvyr yra(cnegf) > 1: _cbc(sbepr_gerr = Abar) nffreg(yra(funyvfgf) == 1) gerr = j.arj_gerr(funyvfgf[-1]) vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fnir\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq juvyr fnivat.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc gvpx """ b = bcgvbaf.Bcgvbaf('ohc gvpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") g = gvzr.gvzr() gyrsg = 1 - (g - vag(g)) gvzr.fyrrc(gyrsg) #!/hfe/ova/rai clguba vzcbeg bf, flf, fgng, gvzr sebz ohc vzcbeg bcgvbaf, tvg, vaqrk, qerphefr sebz ohc.urycref vzcbeg * qrs zretr_vaqrkrf(bhg, e1, e2): sbe r va vaqrk.ZretrVgre([e1, e2]): # SVKZR: fubhyqa'g jr erzbir qryrgrq ragevrf riraghnyyl? Jura? bhg.nqq_vkragel(r) pynff VgreUrycre: qrs __vavg__(frys, y): frys.v = vgre(y) frys.phe = Abar frys.arkg() qrs arkg(frys): gel: frys.phe = frys.v.arkg() rkprcg FgbcVgrengvba: frys.phe = Abar erghea frys.phe qrs purpx_vaqrk(ernqre): gel: ybt('purpx: purpxvat sbejneq vgrengvba...\a') r = Abar q = {} sbe r va ernqre.sbejneq_vgre(): vs r.puvyqera_a: vs bcg.ireobfr: ybt('%08k+%-4q %e\a' % (r.puvyqera_bsf, r.puvyqera_a, r.anzr)) nffreg(r.puvyqera_bsf) nffreg(r.anzr.raqfjvgu('/')) nffreg(abg q.trg(r.puvyqera_bsf)) q[r.puvyqera_bsf] = 1 vs r.syntf & vaqrk.VK_UNFUINYVQ: nffreg(r.fun != vaqrk.RZCGL_FUN) nffreg(r.tvgzbqr) nffreg(abg r be r.anzr == '/') # ynfg ragel vf *nyjnlf* / ybt('purpx: purpxvat abezny vgrengvba...\a') ynfg = Abar sbe r va ernqre: vs ynfg: nffreg(ynfg > r.anzr) ynfg = r.anzr rkprcg: ybt('vaqrk reebe! ng %e\a' % r) envfr ybt('purpx: cnffrq.\a') qrs hcqngr_vaqrk(gbc): ev = vaqrk.Ernqre(vaqrksvyr) jv = vaqrk.Jevgre(vaqrksvyr) evt = VgreUrycre(ev.vgre(anzr=gbc)) gfgneg = vag(gvzr.gvzr()) unfutra = Abar vs bcg.snxr_inyvq: qrs unfutra(anzr): erghea (0100644, vaqrk.SNXR_FUN) gbgny = 0 sbe (cngu,cfg) va qerphefr.erphefvir_qveyvfg([gbc], kqri=bcg.kqri): vs bcg.ireobfr>=2 be (bcg.ireobfr==1 naq fgng.F_VFQVE(cfg.fg_zbqr)): flf.fgqbhg.jevgr('%f\a' % cngu) flf.fgqbhg.syhfu() cebterff('Vaqrkvat: %q\e' % gbgny) ryvs abg (gbgny % 128): cebterff('Vaqrkvat: %q\e' % gbgny) gbgny += 1 juvyr evt.phe naq evt.phe.anzr > cngu: # qryrgrq cnguf vs evt.phe.rkvfgf(): evt.phe.frg_qryrgrq() evt.phe.ercnpx() evt.arkg() vs evt.phe naq evt.phe.anzr == cngu: # cnguf gung nyernql rkvfgrq vs cfg: evt.phe.sebz_fgng(cfg, gfgneg) vs abg (evt.phe.syntf & vaqrk.VK_UNFUINYVQ): vs unfutra: (evt.phe.tvgzbqr, evt.phe.fun) = unfutra(cngu) evt.phe.syntf |= vaqrk.VK_UNFUINYVQ vs bcg.snxr_vainyvq: evt.phe.vainyvqngr() evt.phe.ercnpx() evt.arkg() ryfr: # arj cnguf jv.nqq(cngu, cfg, unfutra = unfutra) cebterff('Vaqrkvat: %q, qbar.\a' % gbgny) vs ev.rkvfgf(): ev.fnir() jv.syhfu() vs jv.pbhag: je = jv.arj_ernqre() vs bcg.purpx: ybt('purpx: orsber zretvat: byqsvyr\a') purpx_vaqrk(ev) ybt('purpx: orsber zretvat: arjsvyr\a') purpx_vaqrk(je) zv = vaqrk.Jevgre(vaqrksvyr) zretr_vaqrkrf(zv, ev, je) ev.pybfr() zv.pybfr() je.pybfr() jv.nobeg() ryfr: jv.pybfr() bcgfcrp = """ ohc vaqrk <-c|z|h> [bcgvbaf...] 
-- c,cevag cevag gur vaqrk ragevrf sbe gur tvira anzrf (nyfb jbexf jvgu -h) z,zbqvsvrq cevag bayl nqqrq/qryrgrq/zbqvsvrq svyrf (vzcyvrf -c) f,fgnghf cevag rnpu svyranzr jvgu n fgnghf pune (N/Z/Q) (vzcyvrf -c) U,unfu cevag gur unfu sbe rnpu bowrpg arkg gb vgf anzr (vzcyvrf -c) y,ybat cevag zber vasbezngvba nobhg rnpu svyr h,hcqngr (erphefviryl) hcqngr gur vaqrk ragevrf sbe gur tvira svyranzrf k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf snxr-inyvq znex nyy vaqrk ragevrf nf hc-gb-qngr rira vs gurl nera'g snxr-vainyvq znex nyy vaqrk ragevrf nf vainyvq purpx pnershyyl purpx vaqrk svyr vagrtevgl s,vaqrksvyr= gur anzr bs gur vaqrk svyr (qrsnhyg 'vaqrk') i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) """ b = bcgvbaf.Bcgvbaf('ohc vaqrk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg (bcg.zbqvsvrq be bcg['cevag'] be bcg.fgnghf be bcg.hcqngr be bcg.purpx): b.sngny('fhccyl bar be zber bs -c, -f, -z, -h, be --purpx') vs (bcg.snxr_inyvq be bcg.snxr_vainyvq) naq abg bcg.hcqngr: b.sngny('--snxr-{va,}inyvq ner zrnavatyrff jvgubhg -h') vs bcg.snxr_inyvq naq bcg.snxr_vainyvq: b.sngny('--snxr-inyvq vf vapbzcngvoyr jvgu --snxr-vainyvq') tvg.purpx_ercb_be_qvr() vaqrksvyr = bcg.vaqrksvyr be tvg.ercb('ohcvaqrk') unaqyr_pgey_p() vs bcg.purpx: ybt('purpx: fgnegvat vavgvny purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) cnguf = vaqrk.erqhpr_cnguf(rkgen) vs bcg.hcqngr: vs abg cnguf: b.sngny('hcqngr (-h) erdhrfgrq ohg ab cnguf tvira') sbe (ec,cngu) va cnguf: hcqngr_vaqrk(ec) vs bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq: sbe (anzr, rag) va vaqrk.Ernqre(vaqrksvyr).svygre(rkgen be ['']): vs (bcg.zbqvsvrq naq (rag.vf_inyvq() be rag.vf_qryrgrq() be abg rag.zbqr)): pbagvahr yvar = '' vs bcg.fgnghf: vs rag.vf_qryrgrq(): yvar += 'Q ' ryvs abg rag.vf_inyvq(): vs rag.fun == vaqrk.RZCGL_FUN: yvar += 'N ' ryfr: yvar += 'Z ' ryfr: yvar += ' ' vs bcg.unfu: yvar += rag.fun.rapbqr('urk') + ' ' vs bcg.ybat: yvar += "%7f %7f " % (bpg(rag.zbqr), bpg(rag.tvgzbqr)) cevag yvar + (anzr be './') vs bcg.purpx naq (bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq be bcg.hcqngr): ybt('purpx: fgnegvat svany purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg sebz ohc vzcbeg bcgvbaf, urycref bcgfcrp = """ ohc eonpxhc-freire -- Guvf pbzznaq vf abg vagraqrq gb or eha znahnyyl. """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc-freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') # trg gur fhopbzznaq'f neti. # Abeznyyl jr pbhyq whfg cnff guvf ba gur pbzznaq yvar, ohg fvapr jr'yy bsgra # or trggvat pnyyrq ba gur bgure raq bs na ffu cvcr, juvpu graqf gb znatyr # neti (ol fraqvat vg ivn gur furyy), guvf jnl vf zhpu fnsre. ohs = flf.fgqva.ernq(4) fm = fgehpg.hacnpx('!V', ohs)[0] nffreg(fm > 0) nffreg(fm < 1000000) ohs = flf.fgqva.ernq(fm) nffreg(yra(ohs) == fm) neti = ohs.fcyvg('\0') # fgqva/fgqbhg ner fhccbfrqyl pbaarpgrq gb 'ohc freire' gung gur pnyyre # fgnegrq sbe hf (bsgra ba gur bgure raq bs na ffu ghaary), fb jr qba'g jnag # gb zvfhfr gurz. Zbir gurz bhg bs gur jnl, gura ercynpr fgqbhg jvgu # n cbvagre gb fgqree va pnfr bhe fhopbzznaq jnagf gb qb fbzrguvat jvgu vg. # # Vg zvtug or avpr gb qb gur fnzr jvgu fgqva, ohg zl rkcrevzragf fubjrq gung # ffu frrzf gb znxr vgf puvyq'f fgqree n ernqnoyr-ohg-arire-ernqf-nalguvat # fbpxrg. Gurl ernyyl fubhyq unir hfrq fuhgqbja(FUHG_JE) ba gur bgure raq # bs vg, ohg cebonoyl qvqa'g. 
erghea ryvs a == 0kssssssss: ybt('ohc freire: erprvir-bowrpgf fhfcraqrq.\a') fhfcraqrq_j = j pbaa.bx() erghea ohs = pbaa.ernq(a) # bowrpg fvmrf va ohc ner ernfbanoyl fznyy #ybt('ernq %q olgrf\a' % a) vs yra(ohs) < a: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq %q olgrf, tbg %q\a' % (a, yra(ohs))) (glcr, pbagrag) = tvg._qrpbqr_cnpxbow(ohs) fun = tvg.pnyp_unfu(glcr, pbagrag) byqcnpx = j.rkvfgf(fun) # SVKZR: jr bayl fhttrfg n fvatyr vaqrk cre plpyr, orpnhfr gur pyvrag # vf pheeragyl qhzo gb qbjaybnq zber guna bar cre plpyr naljnl. # Npghnyyl jr fubhyq svk gur pyvrag, ohg guvf vf n zvabe bcgvzvmngvba # ba gur freire fvqr. vs abg fhttrfgrq naq \ byqcnpx naq (byqcnpx == Gehr be byqcnpx.raqfjvgu('.zvqk')): # SVKZR: jr fubhyqa'g ernyyl unir gb xabj nobhg zvqk svyrf # ng guvf ynlre. Ohg rkvfgf() ba n zvqk qbrfa'g erghea gur # cnpxanzr (fvapr vg qbrfa'g xabj)... cebonoyl jr fubhyq whfg # svk gung qrsvpvrapl bs zvqk svyrf riraghnyyl, nygubhtu vg'yy # znxr gur svyrf ovttre. Guvf zrgubq vf pregnvayl abg irel # rssvpvrag. j.bowpnpur.erserfu(fxvc_zvqk = Gehr) byqcnpx = j.bowpnpur.rkvfgf(fun) ybt('arj fhttrfgvba: %e\a' % byqcnpx) nffreg(byqcnpx) nffreg(byqcnpx != Gehr) nffreg(abg byqcnpx.raqfjvgu('.zvqk')) j.bowpnpur.erserfu(fxvc_zvqk = Snyfr) vs abg fhttrfgrq naq byqcnpx: nffreg(byqcnpx.raqfjvgu('.vqk')) (qve,anzr) = bf.cngu.fcyvg(byqcnpx) vs abg (anzr va fhttrfgrq): ybt("ohc freire: fhttrfgvat vaqrk %f\a" % anzr) pbaa.jevgr('vaqrk %f\a' % anzr) fhttrfgrq[anzr] = 1 ryfr: j._enj_jevgr([ohs]) # ABGERNPURQ qrs ernq_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() e = tvg.ernq_ers(ersanzr) pbaa.jevgr('%f\a' % (e be '').rapbqr('urk')) pbaa.bx() qrs hcqngr_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() arjiny = pbaa.ernqyvar().fgevc() byqiny = pbaa.ernqyvar().fgevc() tvg.hcqngr_ers(ersanzr, arjiny.qrpbqr('urk'), byqiny.qrpbqr('urk')) pbaa.bx() qrs png(pbaa, vq): tvg.purpx_ercb_be_qvr() gel: sbe oybo va tvg.png(vq): pbaa.jevgr(fgehpg.cnpx('!V', yra(oybo))) pbaa.jevgr(oybo) rkprcg XrlReebe, r: ybt('freire: reebe: %f\a' % r) pbaa.jevgr('\0\0\0\0') pbaa.reebe(r) ryfr: pbaa.jevgr('\0\0\0\0') pbaa.bx() bcgfcrp = """ ohc freire """ b = bcgvbaf.Bcgvbaf('ohc freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') ybt('ohc freire: ernqvat sebz fgqva.\a') pbzznaqf = { 'vavg-qve': vavg_qve, 'frg-qve': frg_qve, 'yvfg-vaqrkrf': yvfg_vaqrkrf, 'fraq-vaqrk': fraq_vaqrk, 'erprvir-bowrpgf': erprvir_bowrpgf, 'ernq-ers': ernq_ers, 'hcqngr-ers': hcqngr_ers, 'png': png, } # SVKZR: guvf cebgbpby vf gbgnyyl ynzr naq abg ng nyy shgher-cebbs. # (Rfcrpvnyyl fvapr jr nobeg pbzcyrgryl nf fbba nf *nalguvat* onq unccraf) pbaa = Pbaa(flf.fgqva, flf.fgqbhg) ye = yvarernqre(pbaa) sbe _yvar va ye: yvar = _yvar.fgevc() vs abg yvar: pbagvahr ybt('ohc freire: pbzznaq: %e\a' % yvar) jbeqf = yvar.fcyvg(' ', 1) pzq = jbeqf[0] erfg = yra(jbeqf)>1 naq jbeqf[1] be '' vs pzq == 'dhvg': oernx ryfr: pzq = pbzznaqf.trg(pzq) vs pzq: pzq(pbaa, erfg) ryfr: envfr Rkprcgvba('haxabja freire pbzznaq: %e\a' % yvar) ybt('ohc freire: qbar\a') #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc wbva [-e ubfg:cngu] [ersf be unfurf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc wbva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg rkgen: rkgen = yvarernqre(flf.fgqva) erg = 0 vs bcg.erzbgr: pyv = pyvrag.Pyvrag(bcg.erzbgr) png = pyv.png ryfr: pc = tvg.PngCvcr() png = pc.wbva sbe vq va rkgen: gel: sbe oybo va png(vq): flf.fgqbhg.jevgr(oybo) rkprcg XrlReebe, r: flf.fgqbhg.syhfu() ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, er, reeab, fgng, gvzr, zngu sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, vaqrk, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc fnir [-gp] [-a anzr] -- e,erzbgr= erzbgr ercbfvgbel cngu g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) d,dhvrg qba'g fubj cebterff zrgre fznyyre= bayl onpx hc svyrf fznyyre guna a olgrf """ b = bcgvbaf.Bcgvbaf('ohc fnir', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny("hfr bar be zber bs -g, -p, -a") vs abg rkgen: b.sngny("ab svyranzrf tvira") bcg.cebterff = (vfggl naq abg bcg.dhvrg) bcg.fznyyre = cnefr_ahz(bcg.fznyyre be 0) vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() unaqyr_pgey_p() qrs rngfynfu(qve): vs qve.raqfjvgu('/'): erghea qve[:-1] ryfr: erghea qve cnegf = [''] funyvfgf = [[]] qrs _chfu(cneg): nffreg(cneg) cnegf.nccraq(cneg) funyvfgf.nccraq([]) qrs _cbc(sbepr_gerr): nffreg(yra(cnegf) >= 1) cneg = cnegf.cbc() funyvfg = funyvfgf.cbc() gerr = sbepr_gerr be j.arj_gerr(funyvfg) vs funyvfgf: funyvfgf[-1].nccraq(('40000', cneg, gerr)) ryfr: # guvf jnf gur gbcyriry, fb chg vg onpx sbe fnavgl funyvfgf.nccraq(funyvfg) erghea gerr ynfgerznva = Abar qrs cebterff_ercbeg(a): tybony pbhag, fhopbhag, ynfgerznva fhopbhag += a pp = pbhag + fhopbhag cpg = gbgny naq (pp*100.0/gbgny) be 0 abj = gvzr.gvzr() ryncfrq = abj - gfgneg xcf = ryncfrq naq vag(pp/1024./ryncfrq) xcf_senp = 10 ** vag(zngu.ybt(xcf+1, 10) - 1) xcf = vag(xcf/xcf_senp)*xcf_senp vs pp: erznva = ryncfrq*1.0/pp * (gbgny-pp) ryfr: erznva = 0.0 vs (ynfgerznva naq (erznva > ynfgerznva) naq ((erznva - ynfgerznva)/ynfgerznva < 0.05)): erznva = ynfgerznva ryfr: ynfgerznva = erznva ubhef = vag(erznva/60/60) zvaf = vag(erznva/60 - ubhef*60) frpf = vag(erznva - ubhef*60*60 - zvaf*60) vs ryncfrq < 30: erznvafge = '' xcffge = '' ryfr: xcffge = '%qx/f' % xcf vs ubhef: erznvafge = '%qu%qz' % (ubhef, zvaf) ryvs zvaf: erznvafge = '%qz%q' % (zvaf, frpf) ryfr: erznvafge = '%qf' % frpf cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf) %f %f\e' % (cpg, pp/1024, gbgny/1024, spbhag, sgbgny, erznvafge, xcffge)) e = vaqrk.Ernqre(tvg.ercb('ohcvaqrk')) qrs nyernql_fnirq(rag): erghea rag.vf_inyvq() naq j.rkvfgf(rag.fun) naq rag.fun qrs jnagerphefr_cer(rag): erghea abg nyernql_fnirq(rag) qrs jnagerphefr_qhevat(rag): erghea abg nyernql_fnirq(rag) be rag.fun_zvffvat() gbgny = sgbgny = 0 vs bcg.cebterff: sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_cer): vs abg (sgbgny % 10024): cebterff('Ernqvat vaqrk: %q\e' % sgbgny) rkvfgf = rag.rkvfgf() unfuinyvq = nyernql_fnirq(rag) 
rag.frg_fun_zvffvat(abg unfuinyvq) vs abg bcg.fznyyre be rag.fvmr < bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: gbgny += rag.fvmr sgbgny += 1 cebterff('Ernqvat vaqrk: %q, qbar.\a' % sgbgny) unfufcyvg.cebterff_pnyyonpx = cebterff_ercbeg gfgneg = gvzr.gvzr() pbhag = fhopbhag = spbhag = 0 ynfgfxvc_anzr = Abar ynfgqve = '' sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_qhevat): (qve, svyr) = bf.cngu.fcyvg(rag.anzr) rkvfgf = (rag.syntf & vaqrk.VK_RKVFGF) unfuinyvq = nyernql_fnirq(rag) jnfzvffvat = rag.fun_zvffvat() byqfvmr = rag.fvmr vs bcg.ireobfr: vs abg rkvfgf: fgnghf = 'Q' ryvs abg unfuinyvq: vs rag.fun == vaqrk.RZCGL_FUN: fgnghf = 'N' ryfr: fgnghf = 'Z' ryfr: fgnghf = ' ' vs bcg.ireobfr >= 2: ybt('%f %-70f\a' % (fgnghf, rag.anzr)) ryvs abg fgng.F_VFQVE(rag.zbqr) naq ynfgqve != qve: vs abg ynfgqve.fgnegfjvgu(qve): ybt('%f %-70f\a' % (fgnghf, bf.cngu.wbva(qve, ''))) ynfgqve = qve vs bcg.cebterff: cebterff_ercbeg(0) spbhag += 1 vs abg rkvfgf: pbagvahr vs bcg.fznyyre naq rag.fvmr >= bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: nqq_reebe('fxvccvat ynetr svyr "%f"' % rag.anzr) ynfgfxvc_anzr = rag.anzr pbagvahr nffreg(qve.fgnegfjvgu('/')) qvec = qve.fcyvg('/') juvyr cnegf > qvec: _cbc(sbepr_gerr = Abar) vs qve != '/': sbe cneg va qvec[yra(cnegf):]: _chfu(cneg) vs abg svyr: # ab svyranzr cbegvba zrnaf guvf vf n fhoqve. Ohg # fho/cneragqverpgbevrf nyernql unaqyrq va gur cbc/chfu() cneg nobir. byqgerr = nyernql_fnirq(rag) # znl or Abar arjgerr = _cbc(sbepr_gerr = byqgerr) vs abg byqgerr: vs ynfgfxvc_anzr naq ynfgfxvc_anzr.fgnegfjvgu(rag.anzr): rag.vainyvqngr() ryfr: rag.inyvqngr(040000, arjgerr) rag.ercnpx() vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr pbagvahr # vg'f abg n qverpgbel vq = Abar vs unfuinyvq: zbqr = '%b' % rag.tvgzbqr vq = rag.fun funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) ryfr: vs fgng.F_VFERT(rag.zbqr): gel: s = unfufcyvg.bcra_abngvzr(rag.anzr) rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = unfufcyvg.fcyvg_gb_oybo_be_gerr(j, [s]) ryfr: vs fgng.F_VFQVE(rag.zbqr): nffreg(0) # unaqyrq nobir ryvs fgng.F_VFYAX(rag.zbqr): gel: ey = bf.ernqyvax(rag.anzr) rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = ('120000', j.arj_oybo(ey)) ryfr: nqq_reebe(Rkprcgvba('fxvccvat fcrpvny svyr "%f"' % rag.anzr)) ynfgfxvc_anzr = rag.anzr vs vq: rag.inyvqngr(vag(zbqr, 8), vq) rag.ercnpx() funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr fhopbhag = 0 vs bcg.cebterff: cpg = gbgny naq pbhag*100.0/gbgny be 100 cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf), qbar. 
\a' % (cpg, pbhag/1024, gbgny/1024, spbhag, sgbgny)) juvyr yra(cnegf) > 1: _cbc(sbepr_gerr = Abar) nffreg(yra(funyvfgf) == 1) gerr = j.arj_gerr(funyvfgf[-1]) vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fnir\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq juvyr fnivat.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc gvpx """ b = bcgvbaf.Bcgvbaf('ohc gvpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") g = gvzr.gvzr() gyrsg = 1 - (g - vag(g)) gvzr.fyrrc(gyrsg) #!/hfe/ova/rai clguba vzcbeg bf, flf, fgng, gvzr sebz ohc vzcbeg bcgvbaf, tvg, vaqrk, qerphefr sebz ohc.urycref vzcbeg * qrs zretr_vaqrkrf(bhg, e1, e2): sbe r va vaqrk.ZretrVgre([e1, e2]): # SVKZR: fubhyqa'g jr erzbir qryrgrq ragevrf riraghnyyl? Jura? bhg.nqq_vkragel(r) pynff VgreUrycre: qrs __vavg__(frys, y): frys.v = vgre(y) frys.phe = Abar frys.arkg() qrs arkg(frys): gel: frys.phe = frys.v.arkg() rkprcg FgbcVgrengvba: frys.phe = Abar erghea frys.phe qrs purpx_vaqrk(ernqre): gel: ybt('purpx: purpxvat sbejneq vgrengvba...\a') r = Abar q = {} sbe r va ernqre.sbejneq_vgre(): vs r.puvyqera_a: vs bcg.ireobfr: ybt('%08k+%-4q %e\a' % (r.puvyqera_bsf, r.puvyqera_a, r.anzr)) nffreg(r.puvyqera_bsf) nffreg(r.anzr.raqfjvgu('/')) nffreg(abg q.trg(r.puvyqera_bsf)) q[r.puvyqera_bsf] = 1 vs r.syntf & vaqrk.VK_UNFUINYVQ: nffreg(r.fun != vaqrk.RZCGL_FUN) nffreg(r.tvgzbqr) nffreg(abg r be r.anzr == '/') # ynfg ragel vf *nyjnlf* / ybt('purpx: purpxvat abezny vgrengvba...\a') ynfg = Abar sbe r va ernqre: vs ynfg: nffreg(ynfg > r.anzr) ynfg = r.anzr rkprcg: ybt('vaqrk reebe! ng %e\a' % r) envfr ybt('purpx: cnffrq.\a') qrs hcqngr_vaqrk(gbc): ev = vaqrk.Ernqre(vaqrksvyr) jv = vaqrk.Jevgre(vaqrksvyr) evt = VgreUrycre(ev.vgre(anzr=gbc)) gfgneg = vag(gvzr.gvzr()) unfutra = Abar vs bcg.snxr_inyvq: qrs unfutra(anzr): erghea (0100644, vaqrk.SNXR_FUN) gbgny = 0 sbe (cngu,cfg) va qerphefr.erphefvir_qveyvfg([gbc], kqri=bcg.kqri): vs bcg.ireobfr>=2 be (bcg.ireobfr==1 naq fgng.F_VFQVE(cfg.fg_zbqr)): flf.fgqbhg.jevgr('%f\a' % cngu) flf.fgqbhg.syhfu() cebterff('Vaqrkvat: %q\e' % gbgny) ryvs abg (gbgny % 128): cebterff('Vaqrkvat: %q\e' % gbgny) gbgny += 1 juvyr evt.phe naq evt.phe.anzr > cngu: # qryrgrq cnguf vs evt.phe.rkvfgf(): evt.phe.frg_qryrgrq() evt.phe.ercnpx() evt.arkg() vs evt.phe naq evt.phe.anzr == cngu: # cnguf gung nyernql rkvfgrq vs cfg: evt.phe.sebz_fgng(cfg, gfgneg) vs abg (evt.phe.syntf & vaqrk.VK_UNFUINYVQ): vs unfutra: (evt.phe.tvgzbqr, evt.phe.fun) = unfutra(cngu) evt.phe.syntf |= vaqrk.VK_UNFUINYVQ vs bcg.snxr_vainyvq: evt.phe.vainyvqngr() evt.phe.ercnpx() evt.arkg() ryfr: # arj cnguf jv.nqq(cngu, cfg, unfutra = unfutra) cebterff('Vaqrkvat: %q, qbar.\a' % gbgny) vs ev.rkvfgf(): ev.fnir() jv.syhfu() vs jv.pbhag: je = jv.arj_ernqre() vs bcg.purpx: ybt('purpx: orsber zretvat: byqsvyr\a') purpx_vaqrk(ev) ybt('purpx: orsber zretvat: arjsvyr\a') purpx_vaqrk(je) zv = vaqrk.Jevgre(vaqrksvyr) zretr_vaqrkrf(zv, ev, je) ev.pybfr() zv.pybfr() je.pybfr() jv.nobeg() ryfr: jv.pybfr() bcgfcrp = """ ohc vaqrk <-c|z|h> [bcgvbaf...] 
-- c,cevag cevag gur vaqrk ragevrf sbe gur tvira anzrf (nyfb jbexf jvgu -h) z,zbqvsvrq cevag bayl nqqrq/qryrgrq/zbqvsvrq svyrf (vzcyvrf -c) f,fgnghf cevag rnpu svyranzr jvgu n fgnghf pune (N/Z/Q) (vzcyvrf -c) U,unfu cevag gur unfu sbe rnpu bowrpg arkg gb vgf anzr (vzcyvrf -c) y,ybat cevag zber vasbezngvba nobhg rnpu svyr h,hcqngr (erphefviryl) hcqngr gur vaqrk ragevrf sbe gur tvira svyranzrf k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf snxr-inyvq znex nyy vaqrk ragevrf nf hc-gb-qngr rira vs gurl nera'g snxr-vainyvq znex nyy vaqrk ragevrf nf vainyvq purpx pnershyyl purpx vaqrk svyr vagrtevgl s,vaqrksvyr= gur anzr bs gur vaqrk svyr (qrsnhyg 'vaqrk') i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) """ b = bcgvbaf.Bcgvbaf('ohc vaqrk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg (bcg.zbqvsvrq be bcg['cevag'] be bcg.fgnghf be bcg.hcqngr be bcg.purpx): b.sngny('fhccyl bar be zber bs -c, -f, -z, -h, be --purpx') vs (bcg.snxr_inyvq be bcg.snxr_vainyvq) naq abg bcg.hcqngr: b.sngny('--snxr-{va,}inyvq ner zrnavatyrff jvgubhg -h') vs bcg.snxr_inyvq naq bcg.snxr_vainyvq: b.sngny('--snxr-inyvq vf vapbzcngvoyr jvgu --snxr-vainyvq') tvg.purpx_ercb_be_qvr() vaqrksvyr = bcg.vaqrksvyr be tvg.ercb('ohcvaqrk') unaqyr_pgey_p() vs bcg.purpx: ybt('purpx: fgnegvat vavgvny purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) cnguf = vaqrk.erqhpr_cnguf(rkgen) vs bcg.hcqngr: vs abg cnguf: b.sngny('hcqngr (-h) erdhrfgrq ohg ab cnguf tvira') sbe (ec,cngu) va cnguf: hcqngr_vaqrk(ec) vs bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq: sbe (anzr, rag) va vaqrk.Ernqre(vaqrksvyr).svygre(rkgen be ['']): vs (bcg.zbqvsvrq naq (rag.vf_inyvq() be rag.vf_qryrgrq() be abg rag.zbqr)): pbagvahr yvar = '' vs bcg.fgnghf: vs rag.vf_qryrgrq(): yvar += 'Q ' ryvs abg rag.vf_inyvq(): vs rag.fun == vaqrk.RZCGL_FUN: yvar += 'N ' ryfr: yvar += 'Z ' ryfr: yvar += ' ' vs bcg.unfu: yvar += rag.fun.rapbqr('urk') + ' ' vs bcg.ybat: yvar += "%7f %7f " % (bpg(rag.zbqr), bpg(rag.tvgzbqr)) cevag yvar + (anzr be './') vs bcg.purpx naq (bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq be bcg.hcqngr): ybt('purpx: fgnegvat svany purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg sebz ohc vzcbeg bcgvbaf, urycref bcgfcrp = """ ohc eonpxhc-freire -- Guvf pbzznaq vf abg vagraqrq gb or eha znahnyyl. """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc-freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') # trg gur fhopbzznaq'f neti. # Abeznyyl jr pbhyq whfg cnff guvf ba gur pbzznaq yvar, ohg fvapr jr'yy bsgra # or trggvat pnyyrq ba gur bgure raq bs na ffu cvcr, juvpu graqf gb znatyr # neti (ol fraqvat vg ivn gur furyy), guvf jnl vf zhpu fnsre. ohs = flf.fgqva.ernq(4) fm = fgehpg.hacnpx('!V', ohs)[0] nffreg(fm > 0) nffreg(fm < 1000000) ohs = flf.fgqva.ernq(fm) nffreg(yra(ohs) == fm) neti = ohs.fcyvg('\0') # fgqva/fgqbhg ner fhccbfrqyl pbaarpgrq gb 'ohc freire' gung gur pnyyre # fgnegrq sbe hf (bsgra ba gur bgure raq bs na ffu ghaary), fb jr qba'g jnag # gb zvfhfr gurz. Zbir gurz bhg bs gur jnl, gura ercynpr fgqbhg jvgu # n cbvagre gb fgqree va pnfr bhe fhopbzznaq jnagf gb qb fbzrguvat jvgu vg. # # Vg zvtug or avpr gb qb gur fnzr jvgu fgqva, ohg zl rkcrevzragf fubjrq gung # ffu frrzf gb znxr vgf puvyq'f fgqree n ernqnoyr-ohg-arire-ernqf-nalguvat # fbpxrg. Gurl ernyyl fubhyq unir hfrq fuhgqbja(FUHG_JE) ba gur bgure raq # bs vg, ohg cebonoyl qvqa'g. 
Naljnl, vg'f gbb zrffl, fb yrg'f whfg znxr fher # nalbar ernqvat sebz fgqva vf qvfnccbvagrq. # # (Lbh pna'g whfg yrnir fgqva/fgqbhg "abg bcra" ol pybfvat gur svyr # qrfpevcgbef. Gura gur arkg svyr gung bcraf vf nhgbzngvpnyyl nffvtarq 0 be 1, # naq crbcyr *gelvat* gb ernq/jevgr fgqva/fgqbhg trg fperjrq.) bf.qhc2(0, 3) bf.qhc2(1, 4) bf.qhc2(2, 1) sq = bf.bcra('/qri/ahyy', bf.B_EQBAYL) bf.qhc2(sq, 0) bf.pybfr(sq) bf.raiveba['OHC_FREIRE_ERIREFR'] = urycref.ubfganzr() bf.rkrpic(neti[0], neti) flf.rkvg(99) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo, fhocebprff, gvzr sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * cne2_bx = 0 ahyys = bcra('/qri/ahyy') qrs qroht(f): vs bcg.ireobfr: ybt(f) qrs eha(neti): # ng yrnfg va clguba 2.5, hfvat "fgqbhg=2" be "fgqbhg=flf.fgqree" orybj # qbrfa'g npghnyyl jbex, orpnhfr fhocebprff pybfrf sq #2 evtug orsber # rkrpvat sbe fbzr ernfba. Fb jr jbex nebhaq vg ol qhcyvpngvat gur sq # svefg. sq = bf.qhc(2) # pbcl fgqree gel: c = fhocebprff.Cbcra(neti, fgqbhg=sq, pybfr_sqf=Snyfr) erghea c.jnvg() svanyyl: bf.pybfr(sq) qrs cne2_frghc(): tybony cne2_bx ei = 1 gel: c = fhocebprff.Cbcra(['cne2', '--uryc'], fgqbhg=ahyys, fgqree=ahyys, fgqva=ahyys) ei = c.jnvg() rkprcg BFReebe: ybt('sfpx: jneavat: cne2 abg sbhaq; qvfnoyvat erpbirel srngherf.\a') ryfr: cne2_bx = 1 qrs cnei(yiy): vs bcg.ireobfr >= yiy: vs vfggl: erghea [] ryfr: erghea ['-d'] ryfr: erghea ['-dd'] qrs cne2_trarengr(onfr): erghea eha(['cne2', 'perngr', '-a1', '-p200'] + cnei(2) + ['--', onfr, onfr+'.cnpx', onfr+'.vqk']) qrs cne2_irevsl(onfr): erghea eha(['cne2', 'irevsl'] + cnei(3) + ['--', onfr]) qrs cne2_ercnve(onfr): erghea eha(['cne2', 'ercnve'] + cnei(2) + ['--', onfr]) qrs dhvpx_irevsl(onfr): s = bcra(onfr + '.cnpx', 'eo') s.frrx(-20, 2) jnagfhz = s.ernq(20) nffreg(yra(jnagfhz) == 20) s.frrx(0) fhz = Fun1() sbe o va puhaxlernqre(s, bf.sfgng(s.svyrab()).fg_fvmr - 20): fhz.hcqngr(o) vs fhz.qvtrfg() != jnagfhz: envfr InyhrReebe('rkcrpgrq %e, tbg %e' % (jnagfhz.rapbqr('urk'), fhz.urkqvtrfg())) qrs tvg_irevsl(onfr): vs bcg.dhvpx: gel: dhvpx_irevsl(onfr) rkprcg Rkprcgvba, r: qroht('reebe: %f\a' % r) erghea 1 erghea 0 ryfr: erghea eha(['tvg', 'irevsl-cnpx', '--', onfr]) qrs qb_cnpx(onfr, ynfg): pbqr = 0 vs cne2_bx naq cne2_rkvfgf naq (bcg.ercnve be abg bcg.trarengr): ierfhyg = cne2_irevsl(onfr) vs ierfhyg != 0: vs bcg.ercnve: eerfhyg = cne2_ercnve(onfr) vs eerfhyg != 0: cevag '%f cne2 ercnve: snvyrq (%q)' % (ynfg, eerfhyg) pbqr = eerfhyg ryfr: cevag '%f cne2 ercnve: fhpprrqrq (0)' % ynfg pbqr = 100 ryfr: cevag '%f cne2 irevsl: snvyrq (%q)' % (ynfg, ierfhyg) pbqr = ierfhyg ryfr: cevag '%f bx' % ynfg ryvs abg bcg.trarengr be (cne2_bx naq abg cne2_rkvfgf): terfhyg = tvg_irevsl(onfr) vs terfhyg != 0: cevag '%f tvg irevsl: snvyrq (%q)' % (ynfg, terfhyg) pbqr = terfhyg ryfr: vs cne2_bx naq bcg.trarengr: cerfhyg = cne2_trarengr(onfr) vs cerfhyg != 0: cevag '%f cne2 perngr: snvyrq (%q)' % (ynfg, cerfhyg) pbqr = cerfhyg ryfr: cevag '%f bx' % ynfg ryfr: cevag '%f bx' % ynfg ryfr: nffreg(bcg.trarengr naq (abg cne2_bx be cne2_rkvfgf)) qroht(' fxvccrq: cne2 svyr nyernql trarengrq.\a') erghea pbqr bcgfcrp = """ ohc sfpx [bcgvbaf...] [svyranzrf...] -- e,ercnve nggrzcg gb ercnve reebef hfvat cne2 (qnatrebhf!) 
t,trarengr trarengr nhgb-ercnve vasbezngvba hfvat cne2 i,ireobfr vapernfr ireobfvgl (pna or hfrq zber guna bapr) dhvpx whfg purpx cnpx fun1fhz, qba'g hfr tvg irevsl-cnpx w,wbof= eha 'a' wbof va cnenyyry cne2-bx vzzrqvngryl erghea 0 vs cne2 vf bx, 1 vs abg qvfnoyr-cne2 vtaber cne2 rira vs vg vf ninvynoyr """ b = bcgvbaf.Bcgvbaf('ohc sfpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) cne2_frghc() vs bcg.cne2_bx: vs cne2_bx: flf.rkvg(0) # 'gehr' va fu ryfr: flf.rkvg(1) vs bcg.qvfnoyr_cne2: cne2_bx = 0 tvg.purpx_ercb_be_qvr() vs abg rkgen: qroht('sfpx: Ab svyranzrf tvira: purpxvat nyy cnpxf.\a') rkgen = tybo.tybo(tvg.ercb('bowrpgf/cnpx/*.cnpx')) pbqr = 0 pbhag = 0 bhgfgnaqvat = {} sbe anzr va rkgen: vs anzr.raqfjvgu('.cnpx'): onfr = anzr[:-5] ryvs anzr.raqfjvgu('.vqk'): onfr = anzr[:-4] ryvs anzr.raqfjvgu('.cne2'): onfr = anzr[:-5] ryvs bf.cngu.rkvfgf(anzr + '.cnpx'): onfr = anzr ryfr: envfr Rkprcgvba('%f vf abg n cnpx svyr!' % anzr) (qve,ynfg) = bf.cngu.fcyvg(onfr) cne2_rkvfgf = bf.cngu.rkvfgf(onfr + '.cne2') vs cne2_rkvfgf naq bf.fgng(onfr + '.cne2').fg_fvmr == 0: cne2_rkvfgf = 0 flf.fgqbhg.syhfu() qroht('sfpx: purpxvat %f (%f)\a' % (ynfg, cne2_bx naq cne2_rkvfgf naq 'cne2' be 'tvg')) vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.wbof: ap = qb_cnpx(onfr, ynfg) pbqr = pbqr be ap pbhag += 1 ryfr: juvyr yra(bhgfgnaqvat) >= bcg.wbof: (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 cvq = bf.sbex() vs cvq: # cnerag bhgfgnaqvat[cvq] = 1 ryfr: # puvyq gel: flf.rkvg(qb_cnpx(onfr, ynfg)) rkprcg Rkprcgvba, r: ybt('rkprcgvba: %e\a' % r) flf.rkvg(99) juvyr yra(bhgfgnaqvat): (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.ireobfr naq vfggl: ybt('sfpx qbar. \a') flf.rkvg(pbqr) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg, trgbcg, fhocebprff, fvtany sebz ohc vzcbeg bcgvbaf, ffu sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc eonpxhc vaqrk ... ohc eonpxhc fnir ... ohc eonpxhc fcyvg ... """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc', bcgfcrp, bcgshap=trgbcg.trgbcg) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) < 2: b.sngny('nethzragf rkcrpgrq') pynff FvtRkprcgvba(Rkprcgvba): qrs __vavg__(frys, fvtahz): frys.fvtahz = fvtahz Rkprcgvba.__vavg__(frys, 'fvtany %q erprvirq' % fvtahz) qrs unaqyre(fvtahz, senzr): envfr FvtRkprcgvba(fvtahz) fvtany.fvtany(fvtany.FVTGREZ, unaqyre) fvtany.fvtany(fvtany.FVTVAG, unaqyre) fc = Abar c = Abar erg = 99 gel: ubfganzr = rkgen[0] neti = rkgen[1:] c = ffu.pbaarpg(ubfganzr, 'eonpxhc-freire') netif = '\0'.wbva(['ohc'] + neti) c.fgqva.jevgr(fgehpg.cnpx('!V', yra(netif)) + netif) c.fgqva.syhfu() znva_rkr = bf.raiveba.trg('OHC_ZNVA_RKR') be flf.neti[0] fc = fhocebprff.Cbcra([znva_rkr, 'freire'], fgqva=c.fgqbhg, fgqbhg=c.fgqva) c.fgqva.pybfr() c.fgqbhg.pybfr() svanyyl: juvyr 1: # vs jr trg n fvtany juvyr jnvgvat, jr unir gb xrrc jnvgvat, whfg # va pnfr bhe puvyq qbrfa'g qvr. 
gel: erg = c.jnvg() fc.jnvg() oernx rkprcg FvtRkprcgvba, r: ybt('\aohc eonpxhc: %f\a' % r) bf.xvyy(c.cvq, r.fvtahz) erg = 84 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc arjyvare """ b = bcgvbaf.Bcgvbaf('ohc arjyvare', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") e = er.pbzcvyr(e'([\e\a])') ynfgyra = 0 nyy = '' juvyr 1: y = e.fcyvg(nyy, 1) vs yra(y) <= 1: gel: o = bf.ernq(flf.fgqva.svyrab(), 4096) rkprcg XrlobneqVagreehcg: oernx vs abg o: oernx nyy += o ryfr: nffreg(yra(y) == 3) (yvar, fcyvgpune, nyy) = y #fcyvgpune = '\a' flf.fgqbhg.jevgr('%-*f%f' % (ynfgyra, yvar, fcyvgpune)) vs fcyvgpune == '\e': ynfgyra = yra(yvar) ryfr: ynfgyra = 0 flf.fgqbhg.syhfu() vs ynfgyra be nyy: flf.fgqbhg.jevgr('%-*f\a' % (ynfgyra, nyy)) #!/hfe/ova/rai clguba vzcbeg flf sebz ohc vzcbeg bcgvbaf, tvg, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc znetva """ b = bcgvbaf.Bcgvbaf('ohc znetva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") tvg.purpx_ercb_be_qvr() #tvg.vtaber_zvqk = 1 zv = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) ynfg = '\0'*20 ybatzngpu = 0 sbe v va zv: vs v == ynfg: pbagvahr #nffreg(fge(v) >= ynfg) cz = _unfufcyvg.ovgzngpu(ynfg, v) ybatzngpu = znk(ybatzngpu, cz) ynfg = v cevag ybatzngpu bup-0.29/t/sampledata/y/text000066400000000000000000000000471303127641400157710ustar00rootroot00000000000000this is a text file. See me be texty! bup-0.29/t/sparse-test-data000077500000000000000000000051661303127641400156150ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ from random import randint from sys import stderr, stdout import sys def smaller_region(max_offset): start = randint(0, max_offset) return (start, min(max_offset, randint(start + 1, start + 5))) def possibly_larger_region(max_offset, min_sparse_len): start = randint(0, max_offset) return (start, min(max_offset, randint(start + 1, start + 3 * min_sparse_len))) def initial_region(max_offset, min_sparse_len): start = 0 return (start, min(max_offset, randint(start + 1, start + 3 * min_sparse_len))) def final_region(max_offset, min_sparse_len): start = max(0, randint(max_offset - 3 * min_sparse_len, max_offset - 1)) return (start, max_offset) def region_around_min_len(max_offset, min_sparse_len): start = randint(0, max_offset) return (start, min(max_offset, randint(start + min_sparse_len - 5, start + min_sparse_len + 5))) generators = [] def random_region(): global generators return generators[randint(0, len(generators) - 1)]() out = stdout if len(sys.argv) == 2: out = open(sys.argv[1], 'wb') elif len(sys.argv): print >> stderr, "Usage: sparse-test-data [FILE]" bup_read_size = 2 ** 16 bup_min_sparse_len = 512 out_size = randint(0, bup_read_size * 10) generators = (lambda : smaller_region(out_size), lambda : possibly_larger_region(out_size, bup_min_sparse_len), lambda : initial_region(out_size, bup_min_sparse_len), lambda : final_region(out_size, bup_min_sparse_len), lambda : region_around_min_len(out_size, bup_min_sparse_len)) sparse = [] sparse.append(random_region()) sparse.append(random_region()) # Handle overlaps if sparse[1][0] < sparse[0][0]: sparse[0], sparse[1] = sparse[1], sparse[0] sparse_offsets = [] sparse_offsets.append(sparse[0][0]) if sparse[1][0] <= sparse[0][1]: sparse_offsets.append(max(sparse[0][1], sparse[1][1])) else: sparse_offsets.extend((sparse[0][1], sparse[1][0], 
sparse[1][1])) if sparse[1][1] != out_size: sparse_offsets.append(out_size) # Now sparse_offsets indicates where to start/stop zero runs data = 'x' pos = 0 print >> stderr, 'offsets:', sparse_offsets for offset in sparse_offsets: count = offset - pos print >> stderr, 'write:', 'x' if data == 'x' else '0', count out.write(data * (offset - pos)) pos += count data = '\0' if data == 'x' else 'x' out.close() bup-0.29/t/subtree-hash000077500000000000000000000025451303127641400150240ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import os, sys argv = sys.argv exe = os.path.realpath(argv[0]) exepath = os.path.split(exe)[0] or '.' exeprefix = os.path.split(os.path.abspath(exepath))[0] # fix the PYTHONPATH to include our lib dir libpath = os.path.join(exepath, '..', 'lib') sys.path[:0] = [libpath] os.environ['PYTHONPATH'] = libpath + ':' + os.environ.get('PYTHONPATH', '') from bup.helpers import handle_ctrl_c, readpipe from bup import options optspec = """ subtree-hash ROOT_HASH [PATH_ITEM...] -- """ handle_ctrl_c() o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if len(extra) < 1: o.fatal('must specify a root hash') tree_hash = extra[0] path = extra[1:] while path: target_name = path[0] subtree_items = readpipe(['git', 'ls-tree', '-z', tree_hash]) target_hash = None for entry in subtree_items.split('\0'): if not entry: break info, name = entry.split('\t', 1) if name == target_name: _, _, target_hash = info.split(' ') break if not target_hash: print >> sys.stderr, "Can't find %r in %s" % (target_name, tree_hash) break tree_hash = target_hash path = path[1:] if path: sys.exit(1) print tree_hash bup-0.29/t/sync-tree000077500000000000000000000025471303127641400143450ustar00rootroot00000000000000#!/usr/bin/env bash set -u usage() { cat <&2; exit 1;; esac done shift $(($OPTIND - 1)) || exit $? if ! test $# -eq 2 then usage 1>&2 exit 1 fi src="$1" dest="$2" rsync_opts="-aH --delete" rsync_version=$(rsync --version) if [[ ! "$rsync_version" =~ "ACLs" ]] || [[ "$rsync_version" =~ "no ACLs" ]]; then echo "Not syncing ACLs (not supported by available rsync)" 1>&2 else case $OSTYPE in cygwin|darwin|netbsd) echo "Not syncing ACLs (not yet supported on $OSTYPE)" 1>&2 ;; *) rsync_opts="$rsync_opts -A" ;; esac fi xattrs_available='' if [[ ! "$rsync_version" =~ "xattrs" ]] || [[ "$rsync_version" =~ "no xattrs" ]]; then echo "Not syncing xattrs (not supported by available rsync)" 1>&2 else xattrs_available=yes fi # rsync may fail if -X is specified and the filesystems don't support # xattrs. if test "$xattrs_available"; then rsync $rsync_opts -X "$src" "$dest" if test $? -ne 0; then # Try again without -X exec rsync $rsync_opts "$src" "$dest" fi else exec rsync $rsync_opts "$src" "$dest" fi bup-0.29/t/test-cat-file.sh000077500000000000000000000026221303127641400155000ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? 
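# The cat-file tests below isolate themselves by pointing BUP_DIR at a
# scratch repository under $tmpdir; GIT_DIR is set to the same path so that
# the plain git commands used later in this script (e.g. the git ls-tree
# call that locates the .bupm metadata blob) operate on that same
# repository. The same
#   export BUP_DIR=...; export GIT_DIR=...; bup() { "$top/bup" "$@"; }
# preamble recurs in most of the test scripts that follow.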
export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVPASS bup init WVPASS cd "$tmpdir" WVSTART "cat-file" WVPASS mkdir src WVPASS date > src/foo WVPASS bup index src WVPASS bup save -n src src WVPASS bup cat-file "src/latest/$(pwd)/src/foo" > cat-foo WVPASS diff -u src/foo cat-foo WVSTART "cat-file --meta" WVPASS bup meta --create --no-paths src/foo > src-foo.meta WVPASS bup cat-file --meta "src/latest/$(pwd)/src/foo" > cat-foo.meta WVPASS bup meta -tvvf src-foo.meta | WVPASS grep -vE '^atime: ' > src-foo.list WVPASS bup meta -tvvf cat-foo.meta | WVPASS grep -vE '^atime: ' > cat-foo.list WVPASS diff -u src-foo.list cat-foo.list WVSTART "cat-file --bupm" WVPASS bup cat-file --bupm "src/latest/$(pwd)/src/" > bup-cat-bupm src_hash=$(WVPASS bup ls -s "src/latest/$(pwd)" | cut -d' ' -f 1) || exit $? bupm_hash=$(WVPASS git ls-tree "$src_hash" | grep -F .bupm | cut -d' ' -f 3) \ || exit $? bupm_hash=$(WVPASS echo "$bupm_hash" | cut -d' ' -f 1) || exit $? WVPASS "$top/t/git-cat-tree" "$bupm_hash" > git-cat-bupm if ! cmp git-cat-bupm bup-cat-bupm; then cmp -l git-cat-bupm bup-cat-bupm diff -uN <(bup meta -tvvf git-cat-bupm) <(bup meta -tvvf bup-cat-bupm) WVPASS cmp git-cat-bupm bup-cat-bupm fi WVPASS rm -rf "$tmpdir" bup-0.29/t/test-command-without-init-fails.sh000077500000000000000000000005261303127641400211710ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? set -o pipefail WVSTART 'all' top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVPASS mkdir "$tmpdir/foo" bup index "$tmpdir/foo" &> /dev/null index_rc=$? WVPASSEQ "$index_rc" "15" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-compression.sh000077500000000000000000000024261303127641400163570ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } fs-size() { tar cf - "$@" | wc -c; } WVSTART "compression" WVPASS cd "$tmpdir" D=compression0.tmp WVPASS force-delete "$BUP_DIR" WVPASS bup init WVPASS mkdir $D WVPASS bup index "$top/Documentation" WVPASS bup save -n compression -0 --strip "$top/Documentation" # Some platforms set -A by default when root, so just use it everywhere. expected="$(WVPASS ls -A "$top/Documentation" | WVPASS sort)" || exit $? actual="$(WVPASS bup ls -A compression/latest/ | WVPASS sort)" || exit $? WVPASSEQ "$actual" "$expected" compression_0_size=$(WVPASS fs-size "$BUP_DIR") || exit $? D=compression9.tmp WVPASS force-delete "$BUP_DIR" WVPASS bup init WVPASS mkdir $D WVPASS bup index "$top/Documentation" WVPASS bup save -n compression -9 --strip "$top/Documentation" expected="$(ls -A "$top/Documentation" | sort)" || exit $? actual="$(bup ls -A compression/latest/ | sort)" || exit $? WVPASSEQ "$actual" "$expected" compression_9_size=$(WVPASS fs-size "$BUP_DIR") || exit $? WVPASS [ "$compression_9_size" -lt "$compression_0_size" ] WVPASS rm -rf "$tmpdir" bup-0.29/t/test-drecurse.sh000077500000000000000000000035651303127641400156370ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? 
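# As the expected listings below show, "bup drecurse" prints a directory's
# contents before the directory itself, marks directories with a trailing
# slash, and treats symlinks (src/a-link) as plain entries rather than
# descending into them; the --exclude, --exclude-from, and --exclude-rx
# variants then prune matching paths from that listing.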
export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVPASS cd "$tmpdir" # These tests aren't comprehensive, but test-save-restore-excludes.sh # exercises some of the same code more thoroughly via index, and # --xdev is handled in test-xdev.sh. WVSTART "drecurse" WVPASS bup init WVPASS mkdir src src/a src/b WVPASS touch src/a/1 src/a/2 src/b/1 src/b/2 src/c (cd src && WVPASS ln -s a a-link) WVPASSEQ "$(bup drecurse src)" "src/c src/b/2 src/b/1 src/b/ src/a/2 src/a/1 src/a/ src/a-link src/" WVSTART "drecurse --exclude (file)" WVPASSEQ "$(bup drecurse --exclude src/b/2 src)" "src/c src/b/1 src/b/ src/a/2 src/a/1 src/a/ src/a-link src/" WVSTART "drecurse --exclude (dir)" WVPASSEQ "$(bup drecurse --exclude src/b/ src)" "src/c src/a/2 src/a/1 src/a/ src/a-link src/" WVSTART "drecurse --exclude (symlink)" WVPASSEQ "$(bup drecurse --exclude src/a-link src)" "src/c src/b/2 src/b/1 src/b/ src/a/2 src/a/1 src/a/ src/" WVSTART "drecurse --exclude (absolute path)" WVPASSEQ "$(bup drecurse --exclude src/b/2 "$(pwd)/src")" "$(pwd)/src/c $(pwd)/src/b/1 $(pwd)/src/b/ $(pwd)/src/a/2 $(pwd)/src/a/1 $(pwd)/src/a/ $(pwd)/src/a-link $(pwd)/src/" WVSTART "drecurse --exclude-from" WVPASS echo "src/b" > exclude-list WVPASSEQ "$(bup drecurse --exclude-from exclude-list src)" "src/c src/a/2 src/a/1 src/a/ src/a-link src/" WVSTART "drecurse --exclude-rx (trivial)" WVPASSEQ "$(bup drecurse --exclude-rx '^src/b' src)" "src/c src/a/2 src/a/1 src/a/ src/a-link src/" WVSTART "drecurse --exclude-rx (trivial - absolute path)" WVPASSEQ "$(bup drecurse --exclude-rx "^$(pwd)/src/b" "$(pwd)/src")" \ "$(pwd)/src/c $(pwd)/src/a/2 $(pwd)/src/a/1 $(pwd)/src/a/ $(pwd)/src/a-link $(pwd)/src/" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-fsck.sh000077500000000000000000000030571303127641400147450ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? bup() { "$top/bup" "$@"; } WVPASS "$top/t/sync-tree" "$top/t/sampledata/" "$tmpdir/src/" export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" WVPASS bup init WVPASS cd "$tmpdir" WVSTART "fsck" WVPASS bup index src WVPASS bup save -n fsck-test src/b2 WVPASS bup save -n fsck-test src/var/cmd WVPASS bup save -n fsck-test src/var/doc WVPASS bup save -n fsck-test src/var/lib WVPASS bup save -n fsck-test src/y WVPASS bup fsck WVPASS bup fsck --quick if bup fsck --par2-ok; then WVSTART "fsck (par2)" else WVSTART "fsck (PAR2 IS MISSING)" fi WVPASS bup fsck -g WVPASS bup fsck -r WVPASS bup damage "$BUP_DIR"/objects/pack/*.pack -n10 -s1 -S0 WVFAIL bup fsck --quick WVFAIL bup fsck --quick --disable-par2 WVPASS chmod u+w "$BUP_DIR"/objects/pack/*.idx WVPASS bup damage "$BUP_DIR"/objects/pack/*.idx -n10 -s1 -S0 WVFAIL bup fsck --quick -j4 WVPASS bup damage "$BUP_DIR"/objects/pack/*.pack -n10 -s1024 --percent 0.4 -S0 WVFAIL bup fsck --quick WVFAIL bup fsck --quick -rvv -j99 # fails because repairs were needed if bup fsck --par2-ok; then WVPASS bup fsck -r # ok because of repairs from last time WVPASS bup damage "$BUP_DIR"/objects/pack/*.pack -n202 -s1 --equal -S0 WVFAIL bup fsck WVFAIL bup fsck -rvv # too many errors to be repairable WVFAIL bup fsck -r # too many errors to be repairable else WVFAIL bup fsck --quick -r # still fails because par2 was missing fi WVPASS rm -rf "$tmpdir" bup-0.29/t/test-fuse.sh000077500000000000000000000052221303127641400147550ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . t/lib.sh || exit $? 
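# The FUSE tests are skipped (exit 0) rather than failed when the pieces
# they need aren't available: the python fuse bindings, a loadable fuse
# kernel module, a working fusermount, and, for non-root users, membership
# in the fuse group. BLOCKSIZE/BLOCK_SIZE/DF_BLOCK_SIZE are unset below,
# presumably so that any size-related output stays in the default units the
# expected strings assume.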
set -o pipefail unset BLOCKSIZE BLOCK_SIZE DF_BLOCK_SIZE root_status="$(t/root-status)" || exit $? if ! bup-python -c 'import fuse' 2> /dev/null; then WVSTART 'unable to import fuse; skipping test' exit 0 fi if test -n "$(type -p modprobe)" && ! modprobe fuse; then echo 'Unable to load fuse module; skipping dependent tests.' 1>&2 exit 0 fi if ! fusermount -V; then echo 'skipping FUSE tests: fusermount does not appear to work' exit 0 fi if ! groups | grep -q fuse && test "$root_status" != root; then echo 'skipping FUSE tests: you are not root and not in the fuse group' exit 0 fi top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } # Some versions of bash's printf don't support the relevant date expansion. savename() { readonly secs="$1" WVPASS bup-python -c "from time import strftime, localtime; \ print strftime('%Y-%m-%d-%H%M%S', localtime($secs))" } WVPASS bup init WVPASS cd "$tmpdir" savestamp1=$(WVPASS bup-python -c 'import time; print int(time.time())') || exit $? savestamp2=$(($savestamp1 + 1)) savename1="$(savename "$savestamp1")" || exit $? savename2="$(savename "$savestamp2")" || exit $? WVPASS mkdir src WVPASS echo content > src/foo WVPASS chmod 644 src/foo WVPASS touch -t 201111111111 src/foo # FUSE, python-fuse, something, can't handle negative epoch times. # Use pre-epoch to make sure bup properly "bottoms out" at 0 for now. WVPASS echo content > src/pre-epoch WVPASS chmod 644 src/pre-epoch WVPASS touch -t 196907202018 src/pre-epoch WVPASS bup index src WVPASS bup save -n src -d "$savestamp1" --strip src WVSTART "basics" WVPASS mkdir mnt WVPASS bup fuse mnt result=$(WVPASS ls mnt) || exit $? WVPASSEQ src "$result" result=$(WVPASS ls mnt/src) || exit $? WVPASSEQ "$result" "$savename1 latest" result=$(WVPASS ls mnt/src/latest) || exit $? WVPASSEQ "$result" "foo pre-epoch" result=$(WVPASS cat mnt/src/latest/foo) || exit $? WVPASSEQ "$result" "content" # Right now we don't detect new saves. WVPASS bup save -n src -d "$savestamp2" --strip src result=$(WVPASS ls mnt/src) || exit $? WVPASSEQ "$result" "$savename1 latest" WVPASS fusermount -uz mnt WVSTART "extended metadata" WVPASS bup fuse --meta mnt result=$(TZ=UTC LC_ALL=C WVPASS ls -l mnt/src/latest/) || exit $? readonly user=$(WVPASS id -un) || $? readonly group=$(WVPASS id -gn) || $? WVPASSEQ "$result" "total 1 -rw-r--r-- 1 $user $group 8 Nov 11 2011 foo -rw-r--r-- 1 $user $group 8 Jan 1 1970 pre-epoch" WVPASS fusermount -uz mnt WVPASS rm -rf "$tmpdir" bup-0.29/t/test-gc.sh000077500000000000000000000140651303127641400144110ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? 
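# Rough shape of the gc tests that follow: save a small (~1k) branch and a
# large (~10MB random) branch, delete one branch's ref behind bup's back,
# run "bup gc --unsafe", and verify with t/data-size that the repository
# shrank by roughly the size of the unreferenced data while the surviving
# branch still restores to a tree identical to the source (t/compare-trees).
# The same pattern is repeated for remote (-r) and "bup on -" saves, and a
# final block exercises --threshold by counting "rewriting ..." lines in the
# gc log and comparing the pack lists before and after.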
export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" GC_OPTS=--unsafe bup() { "$top/bup" "$@"; } compare-trees() { "$top/t/compare-trees" "$@"; } data-size() { "$top/t/data-size" "$@"; } WVPASS cd "$tmpdir" WVPASS bup init WVSTART "gc (unchanged repo)" WVPASS mkdir src-1 WVPASS bup random 1k > src-1/1 WVPASS bup index src-1 WVPASS bup save --strip -n src-1 src-1 WVPASS bup gc $GC_OPTS -v WVPASS bup restore -C "$tmpdir/restore" /src-1/latest WVPASS compare-trees src-1/ "$tmpdir/restore/latest/" WVSTART "gc (unchanged, new branch)" WVPASS mkdir src-2 WVPASS bup random 10M > src-2/1 WVPASS bup index src-2 WVPASS bup save --strip -n src-2 src-2 WVPASS bup gc $GC_OPTS -v WVPASS rm -r "$tmpdir/restore" WVPASS bup restore -C "$tmpdir/restore" /src-1/latest WVPASS compare-trees src-1/ "$tmpdir/restore/latest/" WVPASS rm -r "$tmpdir/restore" WVPASS bup restore -C "$tmpdir/restore" /src-2/latest WVPASS compare-trees src-2/ "$tmpdir/restore/latest/" WVSTART "gc (removed branch)" size_before=$(WVPASS data-size "$BUP_DIR") || exit $? WVPASS rm "$BUP_DIR/refs/heads/src-2" WVPASS bup gc $GC_OPTS -v size_after=$(WVPASS data-size "$BUP_DIR") || exit $? WVPASS [ "$size_before" -gt 5000000 ] WVPASS [ "$size_after" -lt 50000 ] WVPASS rm -r "$tmpdir/restore" WVPASS bup restore -C "$tmpdir/restore" /src-1/latest WVPASS compare-trees src-1/ "$tmpdir/restore/latest/" WVPASS rm -r "$tmpdir/restore" WVFAIL bup restore -C "$tmpdir/restore" /src-2/latest WVPASS mkdir src-ab-clean src-ab-clean/a src-ab-clean/b WVPASS bup random 1k > src-ab-clean/a/1 WVPASS bup random 10M > src-ab-clean/b/1 WVSTART "gc (rewriting)" WVPASS rm -rf "$BUP_DIR" WVPASS bup init WVPASS rm -rf src-ab WVPASS cp -pPR src-ab-clean src-ab WVPASS bup index src-ab WVPASS bup save --strip -n src-ab src-ab WVPASS bup index --clear WVPASS bup index src-ab WVPASS bup save -vvv --strip -n a src-ab/a size_before=$(WVPASS data-size "$BUP_DIR") || exit $? WVPASS rm "$BUP_DIR/refs/heads/src-ab" WVPASS bup gc $GC_OPTS -v size_after=$(WVPASS data-size "$BUP_DIR") || exit $? WVPASS [ "$size_before" -gt 5000000 ] WVPASS [ "$size_after" -lt 100000 ] WVPASS rm -r "$tmpdir/restore" WVPASS bup restore -C "$tmpdir/restore" /a/latest WVPASS compare-trees src-ab/a/ "$tmpdir/restore/latest/" WVPASS rm -r "$tmpdir/restore" WVFAIL bup restore -C "$tmpdir/restore" /src-ab/latest WVSTART "gc (save -r after repo rewriting)" WVPASS rm -rf "$BUP_DIR" WVPASS bup init WVPASS bup -d bup-remote init WVPASS rm -rf src-ab WVPASS cp -pPR src-ab-clean src-ab WVPASS bup index src-ab WVPASS bup save -r :bup-remote --strip -n src-ab src-ab WVPASS bup index --clear WVPASS bup index src-ab WVPASS bup save -r :bup-remote -vvv --strip -n a src-ab/a size_before=$(WVPASS data-size bup-remote) || exit $? WVPASS rm bup-remote/refs/heads/src-ab WVPASS bup -d bup-remote gc $GC_OPTS -v size_after=$(WVPASS data-size bup-remote) || exit $? 
WVPASS [ "$size_before" -gt 5000000 ] WVPASS [ "$size_after" -lt 100000 ] WVPASS rm -rf "$tmpdir/restore" WVPASS bup -d bup-remote restore -C "$tmpdir/restore" /a/latest WVPASS compare-trees src-ab/a/ "$tmpdir/restore/latest/" WVPASS rm -r "$tmpdir/restore" WVFAIL bup -d bup-remote restore -C "$tmpdir/restore" /src-ab/latest # Make sure a post-gc index/save that includes gc-ed data works WVPASS bup index src-ab WVPASS bup save -r :bup-remote --strip -n src-ab src-ab WVPASS rm -r "$tmpdir/restore" WVPASS bup -d bup-remote restore -C "$tmpdir/restore" /src-ab/latest WVPASS compare-trees src-ab/ "$tmpdir/restore/latest/" WVSTART "gc (bup on after repo rewriting)" WVPASS rm -rf "$BUP_DIR" WVPASS bup init WVPASS rm -rf src-ab WVPASS cp -pPR src-ab-clean src-ab WVPASS bup on - index src-ab WVPASS bup on - save --strip -n src-ab src-ab WVPASS bup index --clear WVPASS bup on - index src-ab WVPASS bup on - save -vvv --strip -n a src-ab/a size_before=$(WVPASS data-size "$BUP_DIR") || exit $? WVPASS rm "$BUP_DIR/refs/heads/src-ab" WVPASS bup gc $GC_OPTS -v size_after=$(WVPASS data-size "$BUP_DIR") || exit $? WVPASS [ "$size_before" -gt 5000000 ] WVPASS [ "$size_after" -lt 100000 ] WVPASS rm -r "$tmpdir/restore" WVPASS bup restore -C "$tmpdir/restore" /a/latest WVPASS compare-trees src-ab/a/ "$tmpdir/restore/latest/" WVPASS rm -r "$tmpdir/restore" WVFAIL bup restore -C "$tmpdir/restore" /src-ab/latest # Make sure a post-gc index/save that includes gc-ed data works WVPASS bup on - index src-ab WVPASS bup on - save --strip -n src-ab src-ab WVPASS rm -r "$tmpdir/restore" WVPASS bup restore -C "$tmpdir/restore" /src-ab/latest WVPASS compare-trees src-ab/ "$tmpdir/restore/latest/" WVSTART "gc (threshold)" WVPASS rm -rf "$BUP_DIR" WVPASS bup init WVPASS rm -rf src && mkdir src WVPASS echo 0 > src/0 WVPASS echo 1 > src/1 WVPASS bup index src WVPASS bup save -n src-1 src WVPASS rm src/0 WVPASS bup index src WVPASS bup save -n src-2 src WVPASS bup rm --unsafe src-1 packs_before="$(ls "$BUP_DIR/objects/pack/"*.pack)" || exit $? WVPASS bup gc -v $GC_OPTS --threshold 99 2>&1 | tee gc.log packs_after="$(ls "$BUP_DIR/objects/pack/"*.pack)" || exit $? WVPASSEQ 0 "$(grep -cE '^rewriting ' gc.log)" WVPASSEQ "$packs_before" "$packs_after" WVPASS bup gc -v $GC_OPTS --threshold 1 2>&1 | tee gc.log packs_after="$(ls "$BUP_DIR/objects/pack/"*.pack)" || exit $? WVPASSEQ 1 "$(grep -cE '^rewriting ' gc.log)" # Check that only one pack was rewritten # Accommodate some systems that apparently used to change the default # ls sort order which must match LC_COLLATE for comm to work. packs_before="$(sort <(echo "$packs_before"))" || die $? packs_after="$(sort <(echo "$packs_after"))" || die $? only_in_before="$(comm -2 -3 <(echo "$packs_before") <(echo "$packs_after"))" \ || die $? only_in_after="$(comm -1 -3 <(echo "$packs_before") <(echo "$packs_after"))" \ || die $? in_both="$(comm -1 -2 <(echo "$packs_before") <(echo "$packs_after"))" || die $? WVPASSEQ 1 $(echo "$only_in_before" | wc -l) WVPASSEQ 1 $(echo "$only_in_after" | wc -l) WVPASSEQ 1 $(echo "$in_both" | wc -l) WVPASS rm -rf "$tmpdir" bup-0.29/t/test-import-duplicity.sh000077500000000000000000000031361303127641400173330ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? set -o pipefail if ! [ "$(type -p duplicity)" != "" ]; then # FIXME: add WVSKIP. echo "Cannot find duplicity; skipping test)" 1>&2 exit 0 fi top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? 
bup() { "$top/bup" "$@"; } dup() { duplicity --archive-dir "$tmpdir/dup-cache" "$@"; } WVSTART "import-duplicity" WVPASS "$top/t/sync-tree" "$top/t/sampledata/" "$tmpdir/src/" export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" export PASSPHRASE=bup_duplicity_passphrase WVPASS bup init WVPASS cd "$tmpdir" WVPASS mkdir duplicity WVPASS dup src file://duplicity WVPASS bup tick WVPASS touch src/new-file WVPASS dup src file://duplicity WVPASS bup import-duplicity "file://duplicity" import-duplicity WVPASSEQ "$(bup ls import-duplicity/ | wc -l)" "3" WVPASSEQ "$(bup ls import-duplicity/latest/ | sort)" "$(ls src | sort)" WVPASS bup restore -C restore/ import-duplicity/latest/ WVFAIL "$top/t/compare-trees" src/ restore/ > tmp-compare-trees WVPASSEQ $(cat tmp-compare-trees | wc -l) 4 # Note: OS X rsync itemize output is currently only 9 chars, not 11. # Expect something like this (without the leading spaces): # .d..t...... ./ # .L..t...... abs-symlink -> /home/foo/bup/t/sampledata/var/abs-symlink-target # .L..t...... b -> a # .L..t...... c -> b expected_diff_rx='^\.d\.\.t.\.\.\.\.?\.? \./$|^\.L\.\.t.\.\.\.\.?\.? ' if ! grep -qE "$expected_diff_rx" tmp-compare-trees; then echo -n 'tmp-compare-trees: ' 1>&2 cat tmp-compare-trees 1>&2 fi WVPASS grep -qE "$expected_diff_rx" tmp-compare-trees WVPASS rm -rf "$tmpdir" bup-0.29/t/test-import-rdiff-backup.sh000077500000000000000000000015101303127641400176540ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } if ! [ "$(type -p rdiff-backup)" != "" ]; then # FIXME: add WVSKIP. echo "Cannot find rdiff-backup; skipping test)" 1>&2 exit 0 fi D=rdiff-backup.tmp WVSTART "import-rdiff-backup" WVPASS bup init WVPASS cd "$tmpdir" WVPASS mkdir rdiff-backup WVPASS rdiff-backup "$top/cmd" rdiff-backup WVPASS bup tick WVPASS rdiff-backup "$top/Documentation" rdiff-backup WVPASS bup import-rdiff-backup rdiff-backup import-rdiff-backup WVPASSEQ $(bup ls import-rdiff-backup/ | wc -l) 3 WVPASSEQ "$(bup ls -A import-rdiff-backup/latest/ | sort)" \ "$(ls -A "$top/Documentation" | sort)" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-index-check-device.sh000077500000000000000000000044311303127641400174330ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . ./t/lib.sh || exit $? set -o pipefail root_status="$(t/root-status)" || exit $? if [ "$root_status" != root ]; then echo 'Not root: skipping --check-device tests.' exit 0 # FIXME: add WVSKIP. fi if test -n "$(type -p modprobe)" && ! modprobe loop; then echo 'Unable to load loopback module; skipping --check-device test.' 1>&2 exit 0 fi if test -z "$(type -p losetup)"; then echo 'Unable to find losetup: skipping --check-device tests.' 1>&2 exit 0 # FIXME: add WVSKIP. fi if test -z "$(type -p mke2fs)"; then echo 'Unable to find mke2fs: skipping --check-device tests.' 1>&2 exit 0 # FIXME: add WVSKIP. fi WVSTART '--check-device' top="$(pwd)" tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } srcmnt="$(WVPASS wvmkmountpt)" || exit $? tmpmnt1="$(WVPASS wvmkmountpt)" || exit $? tmpmnt2="$(WVPASS wvmkmountpt)" || exit $? WVPASS cd "$tmpdir" WVPASS dd if=/dev/zero of=testfs.img bs=1M count=32 WVPASS mke2fs -F -j -m 0 testfs.img WVPASS mount -o loop testfs.img "$tmpmnt1" # Hide, so that tests can't create risks. 
WVPASS chown root:root "$tmpmnt1" WVPASS chmod 0700 "$tmpmnt1" # Create trivial content. WVPASS date > "$tmpmnt1/foo" WVPASS umount "$tmpmnt1" # Mount twice, so we'll have the same content with different devices. WVPASS mount -oro,loop testfs.img "$tmpmnt1" WVPASS mount -oro,loop testfs.img "$tmpmnt2" # Test default behavior: --check-device. WVPASS mount -oro --bind "$tmpmnt1" "$srcmnt" WVPASS bup init WVPASS bup index --fake-valid "$srcmnt" WVPASS umount "$srcmnt" WVPASS mount -oro --bind "$tmpmnt2" "$srcmnt" WVPASS bup index "$srcmnt" WVPASSEQ "$(bup index --status "$srcmnt")" \ "M $srcmnt/lost+found/ M $srcmnt/foo M $srcmnt/" WVPASS umount "$srcmnt" WVSTART '--no-check-device' WVPASS mount -oro --bind "$tmpmnt1" "$srcmnt" WVPASS bup index --clear WVPASS bup index --fake-valid "$srcmnt" WVPASS umount "$srcmnt" WVPASS mount -oro --bind "$tmpmnt2" "$srcmnt" WVPASS bup index --no-check-device "$srcmnt" WVPASS bup index --status "$srcmnt" WVPASSEQ "$(bup index --status "$srcmnt")" \ " $srcmnt/lost+found/ $srcmnt/foo $srcmnt/" WVPASS umount "$srcmnt" WVPASS umount "$tmpmnt1" WVPASS umount "$tmpmnt2" WVPASS rm -r "$tmpmnt1" "$tmpmnt2" "$tmpdir" bup-0.29/t/test-index-clear.sh000077500000000000000000000011171303127641400162050ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVPASS bup init WVPASS cd "$tmpdir" WVSTART "index --clear" WVPASS mkdir src WVPASS touch src/foo src/bar WVPASS bup index -u src WVPASSEQ "$(bup index -p)" "src/foo src/bar src/ ./" WVPASS rm src/foo WVPASS bup index --clear WVPASS bup index -u src expected="$(WVPASS bup index -p)" || exit $? WVPASSEQ "$expected" "src/bar src/ ./" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-index.sh000077500000000000000000000051151303127641400151230ustar00rootroot00000000000000#!/usr/bin/env bash . wvtest.sh . wvtest-bup.sh . t/lib.sh set -o pipefail top="$(WVPASS /bin/pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? 
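# In the index status checks below, "bup index -s" prefixes each path with a
# status character: A for entries that have been added but never saved, M
# for entries modified since the last save, and a blank for entries that are
# indexed and clean; --fake-valid and --fake-invalid mark entries as up to
# date or as invalid without touching the files themselves.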
export BUP_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVPASS cd "$tmpdir" WVPASS bup init WVSTART "index" D=bupdata.tmp WVPASS force-delete $D WVPASS mkdir $D WVFAIL bup index --exclude-from $D/cannot-exist $D WVPASSEQ "$(bup index --check -p)" "" WVPASSEQ "$(bup index --check -p $D)" "" WVFAIL [ -e $D.fake ] WVFAIL bup index --check -u $D.fake WVPASS bup index --check -u $D WVPASSEQ "$(bup index --check -p $D)" "$D/" WVPASS touch $D/a WVPASS bup random 128k >$D/b WVPASS mkdir $D/d $D/d/e WVPASS bup random 512 >$D/f WVPASS ln -s non-existent-file $D/g WVPASSEQ "$(bup index -s $D/)" "A $D/" WVPASSEQ "$(bup index -s $D/b)" "" WVPASSEQ "$(bup index --check -us $D/b)" "A $D/b" WVPASSEQ "$(bup index --check -us $D/b $D/d)" \ "A $D/d/e/ A $D/d/ A $D/b" WVPASS touch $D/d/z WVPASS bup tick WVPASSEQ "$(bup index --check -usx $D)" \ "A $D/g A $D/f A $D/d/z A $D/d/e/ A $D/d/ A $D/b A $D/a A $D/" WVPASSEQ "$(bup index --check -us $D/a $D/b --fake-valid)" \ " $D/b $D/a" WVPASSEQ "$(bup index --check -us $D/a)" " $D/a" # stays unmodified WVPASSEQ "$(bup index --check -us $D/d --fake-valid)" \ " $D/d/z $D/d/e/ $D/d/" WVPASS touch $D/d/z WVPASS bup index -u $D/d/z # becomes modified WVPASSEQ "$(bup index -s $D/a $D $D/b)" \ "A $D/g A $D/f M $D/d/z $D/d/e/ M $D/d/ $D/b $D/a A $D/" WVPASS bup index -u $D/d/e $D/a --fake-invalid WVPASSEQ "$(cd $D && bup index -m .)" \ "./g ./f ./d/z ./d/e/ ./d/ ./a ./" WVPASSEQ "$(cd $D && bup index -m)" \ "g f d/z d/e/ d/ a ./" WVPASSEQ "$(cd $D && bup index -s .)" "$(cd $D && bup index -s .)" WVFAIL bup save -t $D/doesnt-exist-filename WVPASS mv "$BUP_DIR/bupindex" "$BUP_DIR/bi.old" WVFAIL bup save -t $D/d/e/fifotest WVPASS mkfifo $D/d/e/fifotest WVPASS bup index -u $D/d/e/fifotest WVPASS bup save -t $D/d/e/fifotest WVPASS bup save -t $D/d/e WVPASS rm -f $D/d/e/fifotest WVPASS bup index -u $D/d/e WVFAIL bup save -t $D/d/e/fifotest WVPASS mv "$BUP_DIR/bi.old" "$BUP_DIR/bupindex" WVPASS bup index -u $D/d/e WVPASS bup save -t $D/d/e WVPASSEQ "$(cd $D && bup index -m)" \ "g f d/z d/ a ./" WVPASS bup save -t $D/d WVPASS bup index --fake-invalid $D/d/z WVPASS bup save -t $D/d/z WVPASS bup save -t $D/d/z # test regenerating trees when no files are changed WVPASS bup save -t $D/d WVPASSEQ "$(cd $D && bup index -m)" \ "g f a ./" WVPASS bup save -r ":$BUP_DIR" -n r-test $D WVFAIL bup save -r ":$BUP_DIR/fake/path" -n r-test $D WVFAIL bup save -r ":$BUP_DIR" -n r-test $D/fake/path WVPASS rm -rf "$tmpdir" bup-0.29/t/test-list-idx.sh000077500000000000000000000014201303127641400155440ustar00rootroot00000000000000#!/usr/bin/env bash . wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail TOP="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" bup() { "$TOP/bup" "$@" } WVSTART 'bup list-idx' WVPASS bup init WVPASS cd "$tmpdir" WVPASS mkdir src WVPASS bup random 1k > src/data WVPASS bup index src WVPASS bup save -n src src WVPASS bup list-idx "$BUP_DIR"/objects/pack/*.idx hash1="$(WVPASS bup list-idx "$BUP_DIR"/objects/pack/*.idx)" || exit $? hash1="${hash1##* }" WVPASS bup list-idx --find "${hash1}" "$BUP_DIR"/objects/pack/*.idx \ > list-idx.log || exit $? found="$(cat list-idx.log)" || exit $? found="${found##* }" WVPASSEQ "$found" "$hash1" WVPASSEQ "$(wc -l < list-idx.log | tr -d ' ')" 1 WVPASS rm -r "$tmpdir" bup-0.29/t/test-ls.sh000077500000000000000000000153661303127641400144430ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? 
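# The ls tests build a fixture tree containing one of each interesting file
# type (dotfile, executable, fifo, socket, regular file, symlink), give
# everything fixed permissions and a fixed mtime, and force TZ=UTC so that
# the long-format "bup ls -l" output can be compared against literal
# strings; a later block switches TZ to America/Chicago to confirm that
# dates are rendered in local time.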
tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } export TZ=UTC WVPASS bup init WVPASS cd "$tmpdir" WVPASS mkdir src WVPASS touch src/.dotfile src/executable WVPASS mkfifo src/fifo WVPASS "$top"/t/mksock src/socket WVPASS bup random 1k > src/file WVPASS chmod u+x src/executable WVPASS chmod -R u=rwX,g-rwx,o-rwx . WVPASS touch -t 200910032348 src/.dotfile src/* (WVPASS cd src; WVPASS ln -s file symlink) || exit $? WVPASS touch -t 200910032348 src WVPASS touch -t 200910032348 . WVPASS bup index src WVPASS bup save -n src -d 242312160 src WVPASS bup tag some-tag src WVSTART "ls (short)" (export BUP_FORCE_TTY=1; WVPASSEQ "$(WVPASS bup ls | tr -d ' ')" src) WVPASSEQ "$(WVPASS bup ls /)" "src" WVPASSEQ "$(WVPASS bup ls -A /)" ".commit .tag src" WVPASSEQ "$(WVPASS bup ls -AF /)" ".commit/ .tag/ src/" WVPASSEQ "$(WVPASS bup ls -a /)" ". .. .commit .tag src" WVPASSEQ "$(WVPASS bup ls -aF /)" "./ ../ .commit/ .tag/ src/" WVPASSEQ "$(WVPASS bup ls /.tag)" "some-tag" WVPASSEQ "$(WVPASS bup ls /src)" \ "1977-09-05-125600 latest" WVPASSEQ "$(WVPASS bup ls src/latest/"$tmpdir"/src)" "executable fifo file socket symlink" WVPASSEQ "$(WVPASS bup ls -A src/latest/"$tmpdir"/src)" ".dotfile executable fifo file socket symlink" WVPASSEQ "$(WVPASS bup ls -a src/latest/"$tmpdir"/src)" ". .. .dotfile executable fifo file socket symlink" WVPASSEQ "$(WVPASS bup ls -F src/latest/"$tmpdir"/src)" "executable* fifo| file socket= symlink@" WVPASSEQ "$(WVPASS bup ls --file-type src/latest/"$tmpdir"/src)" "executable fifo| file socket= symlink@" WVPASSEQ "$(WVPASS bup ls -d src/latest/"$tmpdir"/src)" "src/latest$tmpdir/src" WVSTART "ls (long)" WVPASSEQ "$(WVPASS bup ls -l / | tr -s ' ' ' ')" \ "d--------- ?/? 0 1970-01-01 00:00 src" WVPASSEQ "$(WVPASS bup ls -lA / | tr -s ' ' ' ')" \ "d--------- ?/? 0 1970-01-01 00:00 .commit d--------- ?/? 0 1970-01-01 00:00 .tag d--------- ?/? 0 1970-01-01 00:00 src" WVPASSEQ "$(WVPASS bup ls -lAF / | tr -s ' ' ' ')" \ "d--------- ?/? 0 1970-01-01 00:00 .commit/ d--------- ?/? 0 1970-01-01 00:00 .tag/ d--------- ?/? 0 1970-01-01 00:00 src/" WVPASSEQ "$(WVPASS bup ls -la / | tr -s ' ' ' ')" \ "d--------- ?/? 0 1970-01-01 00:00 . d--------- ?/? 0 1970-01-01 00:00 .. d--------- ?/? 0 1970-01-01 00:00 .commit d--------- ?/? 0 1970-01-01 00:00 .tag d--------- ?/? 0 1970-01-01 00:00 src" WVPASSEQ "$(WVPASS bup ls -laF / | tr -s ' ' ' ')" \ "d--------- ?/? 0 1970-01-01 00:00 ./ d--------- ?/? 0 1970-01-01 00:00 ../ d--------- ?/? 0 1970-01-01 00:00 .commit/ d--------- ?/? 0 1970-01-01 00:00 .tag/ d--------- ?/? 0 1970-01-01 00:00 src/" symlink_mode="$(WVPASS ls -l src/symlink | cut -b -10)" || exit $? symlink_bup_info="$(WVPASS bup ls -l src/latest"$tmpdir"/src | grep symlink)" \ || exit $? symlink_date="$(WVPASS echo "$symlink_bup_info" \ | WVPASS perl -ne 'm/.*? (\d+) (\d\d\d\d-\d\d-\d\d \d\d:\d\d)/ and print $2')" \ || exit $? if test "$(uname -s)" != NetBSD; then symlink_size="$(WVPASS bup-python -c "import os print os.lstat('src/symlink').st_size")" || exit $? else # NetBSD appears to return varying sizes, so for now, just ignore it. symlink_size="$(WVPASS echo "$symlink_bup_info" \ | WVPASS perl -ne 'm/.*? (\d+) (\d\d\d\d-\d\d-\d\d \d\d:\d\d)/ and print $1')" \ || exit $? fi uid="$(WVPASS id -u)" || exit $? gid="$(WVPASS bup-python -c 'import os; print os.stat("src").st_gid')" || exit $? user="$(WVPASS id -un)" || exit $? 
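# "bup ls -l" prints symbolic owner/group names while "bup ls -ln"
# prints the numeric ids, so both forms are gathered here for the
# expected listings below.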
group="$(WVPASS bup-python -c 'import grp, os; print grp.getgrgid(os.stat("src").st_gid)[0]')" || exit $? WVPASSEQ "$(bup ls -l src/latest"$tmpdir"/src | tr -s ' ' ' ')" \ "-rwx------ $user/$group 0 2009-10-03 23:48 executable prw------- $user/$group 0 2009-10-03 23:48 fifo -rw------- $user/$group 1024 2009-10-03 23:48 file srwx------ $user/$group 0 2009-10-03 23:48 socket $symlink_mode $user/$group $symlink_size $symlink_date symlink -> file" WVPASSEQ "$(bup ls -la src/latest"$tmpdir"/src | tr -s ' ' ' ')" \ "drwx------ $user/$group 0 2009-10-03 23:48 . drwx------ $user/$group 0 2009-10-03 23:48 .. -rw------- $user/$group 0 2009-10-03 23:48 .dotfile -rwx------ $user/$group 0 2009-10-03 23:48 executable prw------- $user/$group 0 2009-10-03 23:48 fifo -rw------- $user/$group 1024 2009-10-03 23:48 file srwx------ $user/$group 0 2009-10-03 23:48 socket $symlink_mode $user/$group $symlink_size $symlink_date symlink -> file" WVPASSEQ "$(bup ls -lA src/latest"$tmpdir"/src | tr -s ' ' ' ')" \ "-rw------- $user/$group 0 2009-10-03 23:48 .dotfile -rwx------ $user/$group 0 2009-10-03 23:48 executable prw------- $user/$group 0 2009-10-03 23:48 fifo -rw------- $user/$group 1024 2009-10-03 23:48 file srwx------ $user/$group 0 2009-10-03 23:48 socket $symlink_mode $user/$group $symlink_size $symlink_date symlink -> file" WVPASSEQ "$(bup ls -lF src/latest"$tmpdir"/src | tr -s ' ' ' ')" \ "-rwx------ $user/$group 0 2009-10-03 23:48 executable* prw------- $user/$group 0 2009-10-03 23:48 fifo| -rw------- $user/$group 1024 2009-10-03 23:48 file srwx------ $user/$group 0 2009-10-03 23:48 socket= $symlink_mode $user/$group $symlink_size $symlink_date symlink@ -> file" WVPASSEQ "$(bup ls -l --file-type src/latest"$tmpdir"/src | tr -s ' ' ' ')" \ "-rwx------ $user/$group 0 2009-10-03 23:48 executable prw------- $user/$group 0 2009-10-03 23:48 fifo| -rw------- $user/$group 1024 2009-10-03 23:48 file srwx------ $user/$group 0 2009-10-03 23:48 socket= $symlink_mode $user/$group $symlink_size $symlink_date symlink@ -> file" WVPASSEQ "$(bup ls -ln src/latest"$tmpdir"/src | tr -s ' ' ' ')" \ "-rwx------ $uid/$gid 0 2009-10-03 23:48 executable prw------- $uid/$gid 0 2009-10-03 23:48 fifo -rw------- $uid/$gid 1024 2009-10-03 23:48 file srwx------ $uid/$gid 0 2009-10-03 23:48 socket $symlink_mode $uid/$gid $symlink_size $symlink_date symlink -> file" WVPASSEQ "$(bup ls -ld "src/latest$tmpdir/src" | tr -s ' ' ' ')" \ "drwx------ $user/$group 0 2009-10-03 23:48 src/latest$tmpdir/src" WVSTART "ls (backup set - long)" WVPASSEQ "$(bup ls -l src | cut -d' ' -f 1-2)" \ "l--------- ?/? l--------- ?/?" WVSTART "ls (dates TZ != UTC)" export TZ=America/Chicago symlink_date_central="$(bup ls -l src/latest"$tmpdir"/src | grep symlink)" symlink_date_central="$(echo "$symlink_date_central" \ | perl -ne 'm/.*? (\d+) (\d\d\d\d-\d\d-\d\d \d\d:\d\d)/ and print $2')" WVPASSEQ "$(bup ls -ln src/latest"$tmpdir"/src | tr -s ' ' ' ')" \ "-rwx------ $uid/$gid 0 2009-10-03 18:48 executable prw------- $uid/$gid 0 2009-10-03 18:48 fifo -rw------- $uid/$gid 1024 2009-10-03 18:48 file srwx------ $uid/$gid 0 2009-10-03 18:48 socket $symlink_mode $uid/$gid $symlink_size $symlink_date_central symlink -> file" unset TZ WVPASS rm -rf "$tmpdir" bup-0.29/t/test-main.sh000077500000000000000000000004411303127641400147350ustar00rootroot00000000000000#!/usr/bin/env bash . wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail TOP="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? 
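# Invoking bup with no subcommand should refuse to do anything useful;
# the check below pins the exact exit status (99) so changes to the
# top-level argument handling are caught.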
export BUP_DIR="$tmpdir/bup" bup() { "$TOP/bup" "$@" } WVSTART 'main' bup rc=$? WVPASSEQ "$rc" 99 WVPASS rm -r "$tmpdir" bup-0.29/t/test-meta.sh000077500000000000000000000675661303127641400147640ustar00rootroot00000000000000#!/usr/bin/env bash . wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail root_status="$(t/root-status)" || exit $? TOP="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" # Assume that mvmktempdir will always use the same dir. timestamp_resolutions="$(t/ns-timestamp-resolutions "$tmpdir/canary")" \ || exit $? atime_resolution="$(echo $timestamp_resolutions | WVPASS cut -d' ' -f 1)" \ || exit $? mtime_resolution="$(echo $timestamp_resolutions | WVPASS cut -d' ' -f 2)" \ || exit $? WVPASS rm "$tmpdir/canary" bup() { "$TOP/bup" "$@" } hardlink-sets() { "$TOP/t/hardlink-sets" "$@" } id-other-than() { "$TOP/t/id-other-than" "$@" } # Very simple metadata tests -- create a test tree then check that bup # meta can reproduce the metadata correctly (according to bup xstat) # via create, extract, start-extract, and finish-extract. The current # tests are crude, and this does not fully test devices, varying # users/groups, acls, attrs, etc. genstat() { ( export PATH="$TOP:$PATH" # pick up bup # Skip atime (test elsewhere) to avoid the observer effect. WVPASS find . | WVPASS sort \ | WVPASS xargs bup xstat \ --mtime-resolution "$mtime_resolution"ns \ --exclude-fields ctime,atime,size ) } test-src-create-extract() { # Test bup meta create/extract for ./src -> ./src-restore. # Also writes to ./src-stat and ./src-restore-stat. ( (WVPASS cd src; WVPASS genstat) > src-stat || exit $? WVPASS bup meta --create --recurse --file src.meta src # Test extract. WVPASS force-delete src-restore WVPASS mkdir src-restore WVPASS cd src-restore WVPASS bup meta --extract --file ../src.meta WVPASS test -d src (WVPASS cd src; WVPASS genstat >../../src-restore-stat) || exit $? WVPASS diff -U5 ../src-stat ../src-restore-stat # Test start/finish extract. WVPASS force-delete src WVPASS bup meta --start-extract --file ../src.meta WVPASS test -d src WVPASS bup meta --finish-extract --file ../src.meta (WVPASS cd src; WVPASS genstat >../../src-restore-stat) || exit $? WVPASS diff -U5 ../src-stat ../src-restore-stat ) } test-src-save-restore() { # Test bup save/restore metadata for ./src -> ./src-restore. Also # writes to BUP_DIR. Note that for now this just tests the # restore below src/, in order to avoid having to worry about # operations that require root (like chown /home). ( WVPASS rm -rf "$BUP_DIR" WVPASS bup init WVPASS bup index src WVPASS bup save -t -n src src # Test extract. WVPASS force-delete src-restore WVPASS mkdir src-restore WVPASS bup restore -C src-restore "/src/latest$(pwd)/" WVPASS test -d src-restore/src WVPASS "$TOP/t/compare-trees" -c src/ src-restore/src/ WVPASS rm -rf src.bup ) } setup-test-tree() { WVPASS "$TOP/t/sync-tree" "$TOP/t/sampledata/" "$tmpdir/src/" # Add some hard links for the general tests. ( WVPASS cd "$tmpdir"/src WVPASS touch hardlink-target WVPASS ln hardlink-target hardlink-1 WVPASS ln hardlink-target hardlink-2 WVPASS ln hardlink-target hardlink-3 ) || exit $? # Add some trivial files for the index, modify, save tests. ( WVPASS cd "$tmpdir"/src WVPASS mkdir volatile WVPASS touch volatile/{1,2,3} ) || exit $? # Regression test for metadata sort order. Previously, these two # entries would sort in the wrong order because the metadata # entries were being sorted by mangled name, but the index isn't. 
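# (The dd below makes "foo" large enough that save stores it under a
# mangled name -- roughly "foo.bup" -- while "foo-bar" keeps its plain
# name; "foo-bar" sorts after "foo" but before "foo.bup", so the two
# orderings can disagree.)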
WVPASS dd if=/dev/zero of="$tmpdir"/src/foo bs=1k count=33 WVPASS touch -t 201111111111 "$tmpdir"/src/foo WVPASS touch -t 201112121111 "$tmpdir"/src/foo-bar t/mksock "$tmpdir"/src/test-socket || true } # Use the test tree to check bup meta. WVSTART 'meta --create/--extract' ( tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" WVPASS setup-test-tree WVPASS cd "$tmpdir" WVPASS test-src-create-extract # Test a top-level file (not dir). WVPASS touch src-file WVPASS bup meta -cf src-file.meta src-file WVPASS mkdir dest WVPASS cd dest WVPASS bup meta -xf ../src-file.meta WVPASS rm -r "$tmpdir" ) || exit $? # Use the test tree to check bup save/restore metadata. WVSTART 'metadata save/restore (general)' ( tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" WVPASS setup-test-tree WVPASS cd "$tmpdir" WVPASS test-src-save-restore # Test a deeper subdir/ to make sure top-level non-dir metadata is # restored correctly. We need at least one dir and one non-dir at # the "top-level". WVPASS test -d src/var/cmd WVPASS test -f src/var/cmd/save-cmd.py WVPASS rm -rf "$BUP_DIR" WVPASS bup init WVPASS touch -t 201111111111 src-restore # Make sure the top won't match. WVPASS bup index src WVPASS bup save -t -n src src WVPASS force-delete src-restore WVPASS bup restore -C src-restore "/src/latest$(pwd)/src/var/." WVPASS touch -t 201211111111 src-restore # Make sure the top won't match. # Check that the only difference is the top dir. WVFAIL $TOP/t/compare-trees -c src/var/ src-restore/ > tmp-compare-trees WVPASSEQ $(cat tmp-compare-trees | wc -l) 1 # The number of rsync status characters varies, so accept any # number of trailing dots. For example OS X native rsync produces # 9, but Homebrew's produces 12, while on other platforms, 11 is # common. expected_diff_rx='^\.d\.\.t\.\.\.(\.)+ \./$' if ! grep -qE "$expected_diff_rx" tmp-compare-trees; then echo -n 'tmp-compare-trees: ' 1>&2 cat tmp-compare-trees 1>&2 fi WVPASS grep -qE "$expected_diff_rx" tmp-compare-trees WVPASS rm -r "$tmpdir" ) || exit $? # Test that we pull the index (not filesystem) metadata for any # unchanged files whenever we're saving other files in a given # directory. WVSTART 'metadata save/restore (using index metadata)' ( tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" WVPASS setup-test-tree WVPASS cd "$tmpdir" # ...for now -- might be a problem with hardlink restores that was # causing noise wrt this test. WVPASS rm -rf src/hardlink* # Pause here to keep the filesystem changes far enough away from # the first index run that bup won't cap their index timestamps # (see "bup help index" for more information). Without this # sleep, the compare-trees test below "Bup should *not* pick up # these metadata..." may fail. WVPASS sleep 1 WVPASS rm -rf "$BUP_DIR" WVPASS bup init WVPASS bup index src WVPASS bup save -t -n src src WVPASS force-delete src-restore-1 WVPASS mkdir src-restore-1 WVPASS bup restore -C src-restore-1 "/src/latest$(pwd)/" WVPASS test -d src-restore-1/src WVPASS "$TOP/t/compare-trees" -c src/ src-restore-1/src/ WVPASS echo "blarg" > src/volatile/1 WVPASS cp -pP src/volatile/1 src-restore-1/src/volatile/ WVPASS bup index src # Bup should *not* pick up these metadata changes. 
WVPASS touch src/volatile/2 WVPASS bup save -t -n src src WVPASS force-delete src-restore-2 WVPASS mkdir src-restore-2 WVPASS bup restore -C src-restore-2 "/src/latest$(pwd)/" WVPASS test -d src-restore-2/src WVPASS "$TOP/t/compare-trees" -c src-restore-1/src/ src-restore-2/src/ WVPASS rm -r "$tmpdir" ) || exit $? setup-hardlink-test() { WVPASS rm -rf "$tmpdir/src" "$BUP_DIR" WVPASS bup init WVPASS mkdir "$tmpdir/src" } hardlink-test-run-restore() { WVPASS force-delete src-restore WVPASS mkdir src-restore WVPASS bup restore -C src-restore "/src/latest$(pwd)/" WVPASS test -d src-restore/src } # Test hardlinks more carefully. WVSTART 'metadata save/restore (hardlinks)' ( tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" WVPASS setup-hardlink-test WVPASS cd "$tmpdir" # Test trivial case - single hardlink. ( WVPASS cd src WVPASS touch hardlink-target WVPASS ln hardlink-target hardlink-1 ) || exit $? WVPASS bup index src WVPASS bup save -t -n src src WVPASS hardlink-test-run-restore WVPASS "$TOP/t/compare-trees" -c src/ src-restore/src/ # Test the case where the hardlink hasn't changed, but the tree # needs to be saved again. i.e. the save-cmd.py "if hashvalid:" # case. ( WVPASS cd src WVPASS echo whatever > something-new ) || exit $? WVPASS bup index src WVPASS bup save -t -n src src WVPASS hardlink-test-run-restore WVPASS "$TOP/t/compare-trees" -c src/ src-restore/src/ # Test hardlink changes between index runs. # WVPASS setup-hardlink-test WVPASS cd src WVPASS touch hardlink-target-a WVPASS touch hardlink-target-b WVPASS ln hardlink-target-a hardlink-b-1 WVPASS ln hardlink-target-a hardlink-a-1 WVPASS cd .. WVPASS bup index -vv src WVPASS rm src/hardlink-b-1 WVPASS ln src/hardlink-target-b src/hardlink-b-1 WVPASS bup index -vv src WVPASS bup save -t -n src src WVPASS hardlink-test-run-restore WVPASS echo ./src/hardlink-a-1 > hardlink-sets.expected WVPASS echo ./src/hardlink-target-a >> hardlink-sets.expected WVPASS echo >> hardlink-sets.expected WVPASS echo ./src/hardlink-b-1 >> hardlink-sets.expected WVPASS echo ./src/hardlink-target-b >> hardlink-sets.expected (WVPASS cd src-restore; WVPASS hardlink-sets .) > hardlink-sets.restored \ || exit $? WVPASS diff -u hardlink-sets.expected hardlink-sets.restored # Test hardlink changes between index and save -- hardlink set [a # b c d] changes to [a b] [c d]. At least right now bup should # notice and recreate the latter. WVPASS setup-hardlink-test WVPASS cd "$tmpdir"/src WVPASS touch a WVPASS ln a b WVPASS ln a c WVPASS ln a d WVPASS cd .. WVPASS bup index -vv src WVPASS rm src/c src/d WVPASS touch src/c WVPASS ln src/c src/d WVPASS bup save -t -n src src WVPASS hardlink-test-run-restore WVPASS echo ./src/a > hardlink-sets.expected WVPASS echo ./src/b >> hardlink-sets.expected WVPASS echo >> hardlink-sets.expected WVPASS echo ./src/c >> hardlink-sets.expected WVPASS echo ./src/d >> hardlink-sets.expected (WVPASS cd src-restore; WVPASS hardlink-sets .) > hardlink-sets.restored \ || exit $? WVPASS diff -u hardlink-sets.expected hardlink-sets.restored # Test that we don't link outside restore tree. WVPASS setup-hardlink-test WVPASS cd "$tmpdir" WVPASS mkdir src/a src/b WVPASS touch src/a/1 WVPASS ln src/a/1 src/b/1 WVPASS bup index -vv src WVPASS bup save -t -n src src WVPASS force-delete src-restore WVPASS mkdir src-restore WVPASS bup restore -C src-restore "/src/latest$(pwd)/src/a/" WVPASS test -e src-restore/1 WVPASS echo -n > hardlink-sets.expected (WVPASS cd src-restore; WVPASS hardlink-sets .) 
> hardlink-sets.restored \ || exit $? WVPASS diff -u hardlink-sets.expected hardlink-sets.restored # Test that we do link within separate sub-trees. WVPASS setup-hardlink-test WVPASS cd "$tmpdir" WVPASS mkdir src/a src/b WVPASS touch src/a/1 WVPASS ln src/a/1 src/b/1 WVPASS bup index -vv src/a src/b WVPASS bup save -t -n src src/a src/b WVPASS hardlink-test-run-restore WVPASS echo ./src/a/1 > hardlink-sets.expected WVPASS echo ./src/b/1 >> hardlink-sets.expected (WVPASS cd src-restore; WVPASS hardlink-sets .) > hardlink-sets.restored \ || exit $? WVPASS diff -u hardlink-sets.expected hardlink-sets.restored WVPASS rm -r "$tmpdir" ) || exit $? WVSTART 'meta --edit' ( tmpdir="$(WVPASS wvmktempdir)" || exit $? WVPASS cd "$tmpdir" WVPASS mkdir src WVPASS bup meta -cf src.meta src WVPASS bup meta --edit --set-uid 0 src.meta | WVPASS bup meta -tvvf - \ | WVPASS grep -qE '^uid: 0' WVPASS bup meta --edit --set-uid 1000 src.meta | WVPASS bup meta -tvvf - \ | WVPASS grep -qE '^uid: 1000' WVPASS bup meta --edit --set-gid 0 src.meta | WVPASS bup meta -tvvf - \ | WVPASS grep -qE '^gid: 0' WVPASS bup meta --edit --set-gid 1000 src.meta | WVPASS bup meta -tvvf - \ | WVPASS grep -qE '^gid: 1000' WVPASS bup meta --edit --set-user foo src.meta | WVPASS bup meta -tvvf - \ | WVPASS grep -qE '^user: foo' WVPASS bup meta --edit --set-user bar src.meta | WVPASS bup meta -tvvf - \ | WVPASS grep -qE '^user: bar' WVPASS bup meta --edit --unset-user src.meta | WVPASS bup meta -tvvf - \ | WVPASS grep -qE '^user:' WVPASS bup meta --edit --set-user bar --unset-user src.meta \ | WVPASS bup meta -tvvf - | WVPASS grep -qE '^user:' WVPASS bup meta --edit --unset-user --set-user bar src.meta \ | WVPASS bup meta -tvvf - | WVPASS grep -qE '^user: bar' WVPASS bup meta --edit --set-group foo src.meta | WVPASS bup meta -tvvf - \ | WVPASS grep -qE '^group: foo' WVPASS bup meta --edit --set-group bar src.meta | WVPASS bup meta -tvvf - \ | WVPASS grep -qE '^group: bar' WVPASS bup meta --edit --unset-group src.meta | WVPASS bup meta -tvvf - \ | WVPASS grep -qE '^group:' WVPASS bup meta --edit --set-group bar --unset-group src.meta \ | WVPASS bup meta -tvvf - | WVPASS grep -qE '^group:' WVPASS bup meta --edit --unset-group --set-group bar src.meta \ | WVPASS bup meta -tvvf - | grep -qE '^group: bar' WVPASS rm -r "$tmpdir" ) || exit $? WVSTART 'meta --no-recurse' ( tmpdir="$(WVPASS wvmktempdir)" || exit $? WVPASS cd "$tmpdir" WVPASS mkdir src WVPASS mkdir src/foo WVPASS touch src/foo/{1,2,3} WVPASS bup meta -cf src.meta src WVPASSEQ "$(bup meta -tf src.meta | LC_ALL=C sort)" "src/ src/foo/ src/foo/1 src/foo/2 src/foo/3" WVPASS bup meta --no-recurse -cf src.meta src WVPASSEQ "$(bup meta -tf src.meta | LC_ALL=C sort)" "src/" WVPASS rm -r "$tmpdir" ) || exit $? # Test ownership restoration (when not root or fakeroot). ( if [ "$root_status" != none ]; then exit 0 fi tmpdir="$(WVPASS wvmktempdir)" || exit $? first_group="$(WVPASS bup-python -c 'import os,grp; \ print grp.getgrgid(os.getgroups()[0])[0]')" || exit $? last_group="$(bup-python -c 'import os,grp; \ print grp.getgrgid(os.getgroups()[-1])[0]')" || exit $? last_group_erx="$(escape-erx "$last_group")" WVSTART 'metadata (restoration of ownership)' WVPASS cd "$tmpdir" WVPASS touch src # Some systems always assign the parent dir group to new paths # (sgid). Make sure the group is one we're in. WVPASS chgrp -R "$first_group" src WVPASS bup meta -cf src.meta src WVPASS mkdir dest WVPASS cd dest # Make sure we don't change (or try to change) the user when not root. 
WVPASS bup meta --edit --set-user root ../src.meta | WVPASS bup meta -x WVPASS bup xstat src | WVPASS grep -qvE '^user: root' WVPASS rm -rf src WVPASS bup meta --edit --unset-user --set-uid 0 ../src.meta \ | WVPASS bup meta -x WVPASS bup xstat src | WVPASS grep -qvE '^user: root' # Make sure we can restore one of the user's groups. WVPASS rm -rf src WVPASS bup meta --edit --set-group "$last_group" ../src.meta \ | WVPASS bup meta -x WVPASS bup xstat src | WVPASS grep -qE "^group: $last_group_erx" # Make sure we can restore one of the user's gids. user_gids="$(id -G)" || exit $? last_gid="$(echo ${user_gids/* /})" || exit $? WVPASS rm -rf src WVPASS bup meta --edit --unset-group --set-gid "$last_gid" ../src.meta \ | WVPASS bup meta -x WVPASS bup xstat src | WVPASS grep -qE "^gid: $last_gid" # Test --numeric-ids (gid). WVPASS rm -rf src current_gidx=$(bup meta -tvvf ../src.meta | grep -e '^gid:') || exit $? WVPASS bup meta --edit --set-group "$last_group" ../src.meta \ | WVPASS bup meta -x --numeric-ids new_gidx=$(bup xstat src | grep -e '^gid:') || exit $? WVPASSEQ "$current_gidx" "$new_gidx" # Test that restoring an unknown user works. unknown_user=$("$TOP"/t/unknown-owner --user) || exit $? WVPASS rm -rf src current_uidx=$(bup meta -tvvf ../src.meta | grep -e '^uid:') || exit $? WVPASS bup meta --edit --set-user "$unknown_user" ../src.meta \ | WVPASS bup meta -x new_uidx=$(bup xstat src | grep -e '^uid:') || exit $? WVPASSEQ "$current_uidx" "$new_uidx" # Test that restoring an unknown group works. unknown_group=$("$TOP"/t/unknown-owner --group) || exit $? WVPASS rm -rf src current_gidx=$(bup meta -tvvf ../src.meta | grep -e '^gid:') || exit $? WVPASS bup meta --edit --set-group "$unknown_group" ../src.meta \ | WVPASS bup meta -x new_gidx=$(bup xstat src | grep -e '^gid:') || exit $? WVPASSEQ "$current_gidx" "$new_gidx" WVPASS rm -r "$tmpdir" ) || exit $? # Test ownership restoration (when root or fakeroot). ( if [ "$root_status" = none ]; then exit 0 fi tmpdir="$(WVPASS wvmktempdir)" || exit $? uid=$(WVPASS id -un) || exit $? gid=$(WVPASS id -gn) || exit $? WVSTART 'metadata (restoration of ownership as root)' WVPASS cd "$tmpdir" WVPASS touch src WVPASS chown "$uid:$gid" src # In case the parent dir is sgid, etc. WVPASS bup meta -cf src.meta src WVPASS mkdir dest WVPASS chmod 700 dest # so we can't accidentally do something insecure WVPASS cd dest other_uinfo="$(id-other-than --user "$uid")" || exit $? other_user="${other_uinfo%%:*}" other_uid="${other_uinfo##*:}" other_ginfo="$(id-other-than --group "$gid")" || exit $? other_group="${other_ginfo%%:*}" other_gid="${other_ginfo##*:}" # Make sure we can restore a uid (must be in /etc/passwd b/c cygwin). WVPASS bup meta --edit --unset-user --set-uid "$other_uid" ../src.meta \ | WVPASS bup meta -x WVPASS bup xstat src | WVPASS grep -qE "^uid: $other_uid" # Make sure we can restore a gid (must be in /etc/group b/c cygwin). WVPASS bup meta --edit --unset-group --set-gid "$other_gid" ../src.meta \ | WVPASS bup meta -x WVPASS bup xstat src | WVPASS grep -qE "^gid: $other_gid" other_uinfo2="$(id-other-than --user "$(id -un)" "$other_user")" || exit $? other_user2="${other_uinfo2%%:*}" other_user2_erx="$(escape-erx "$other_user2")" || exit $? other_uid2="${other_uinfo2##*:}" other_ginfo2="$(id-other-than --group "$(id -gn)" "$other_group")" || exit $? other_group2="${other_ginfo2%%:*}" other_group2_erx="$(escape-erx "$other_group2")" || exit $? other_gid2="${other_ginfo2##*:}" # Try to restore a user (and see that user trumps uid when uid is not 0). 
WVPASS bup meta --edit \ --set-uid "$other_uid" --set-user "$other_user2" ../src.meta \ | WVPASS bup meta -x WVPASS bup xstat src | WVPASS grep -qE "^user: $other_user2_erx" # Try to restore a group (and see that group trumps gid when gid is not 0). WVPASS bup meta --edit \ --set-gid "$other_gid" --set-group "$other_group2" ../src.meta \ | WVPASS bup meta -x WVPASS bup xstat src | WVPASS grep -qE "^group: $other_group2_erx" # Test --numeric-ids (uid). Note the name 'root' is not handled # specially, so we use that here as the test user name. We assume # that the root user's uid is never 42. WVPASS rm -rf src WVPASS bup meta --edit --set-user root --set-uid "$other_uid" ../src.meta \ | WVPASS bup meta -x --numeric-ids new_uidx=$(bup xstat src | grep -e '^uid:') || exit $? WVPASSEQ "$new_uidx" "uid: $other_uid" # Test --numeric-ids (gid). Note the name 'root' is not handled # specially, so we use that here as the test group name. We # assume that the root group's gid is never 42. WVPASS rm -rf src WVPASS bup meta --edit --set-group root --set-gid "$other_gid" ../src.meta \ | WVPASS bup meta -x --numeric-ids new_gidx=$(bup xstat src | grep -e '^gid:') || exit $? WVPASSEQ "$new_gidx" "gid: $other_gid" # Test that restoring an unknown user works. unknown_user=$("$TOP"/t/unknown-owner --user) || exit $? WVPASS rm -rf src WVPASS bup meta --edit \ --set-uid "$other_uid" --set-user "$unknown_user" ../src.meta \ | WVPASS bup meta -x new_uidx=$(bup xstat src | grep -e '^uid:') || exit $? WVPASSEQ "$new_uidx" "uid: $other_uid" # Test that restoring an unknown group works. unknown_group=$("$TOP"/t/unknown-owner --group) || exit $? WVPASS rm -rf src WVPASS bup meta --edit \ --set-gid "$other_gid" --set-group "$unknown_group" ../src.meta \ | WVPASS bup meta -x new_gidx=$(bup xstat src | grep -e '^gid:') || exit $? WVPASSEQ "$new_gidx" "gid: $other_gid" if ! [[ $(uname) =~ CYGWIN ]]; then # For now, skip these on Cygwin because it doesn't allow # restoring an unknown uid/gid. # Make sure a uid of 0 trumps a non-root user. WVPASS bup meta --edit --set-user "$other_user2" ../src.meta \ | WVPASS bup meta -x WVPASS bup xstat src | WVPASS grep -qvE "^user: $other_user2_erx" WVPASS bup xstat src | WVPASS grep -qE "^uid: 0" # Make sure a gid of 0 trumps a non-root group. WVPASS bup meta --edit --set-group "$other_group2" ../src.meta \ | WVPASS bup meta -x WVPASS bup xstat src | WVPASS grep -qvE "^group: $other_group2_erx" WVPASS bup xstat src | WVPASS grep -qE "^gid: 0" fi WVPASS rm -r "$tmpdir" ) || exit $? # Root-only tests that require an FS with all the trimmings: ACLs, # Linux attr, Linux xattr, etc. if [ "$root_status" = root ]; then ( # Some cleanup handled in universal-cleanup() above. # These tests are only likely to work under Linux for now # (patches welcome). [[ $(uname) =~ Linux ]] || exit 0 if ! modprobe loop; then echo 'Unable to load loopback module; skipping dependent tests.' 1>&2 exit 0 fi testfs="$(WVPASS wvmkmountpt)" || exit $? testfs_limited="$(WVPASS wvmkmountpt)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" WVSTART 'meta - general (as root)' WVPASS setup-test-tree WVPASS cd "$tmpdir" umount "$testfs" WVPASS dd if=/dev/zero of=testfs.img bs=1M count=32 # Make sure we have all the options the chattr test needs # (i.e. create a "normal" ext4 filesystem). 
WVPASS mke2fs -F -m 0 \ -I 256 \ -O has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize \ testfs.img WVPASS mount -o loop,acl,user_xattr testfs.img "$testfs" # Hide, so that tests can't create risks. WVPASS chown root:root "$testfs" WVPASS chmod 0700 "$testfs" umount "$testfs_limited" WVPASS dd if=/dev/zero of=testfs-limited.img bs=1M count=32 WVPASS mkfs -t vfat testfs-limited.img WVPASS mount -o loop,uid=root,gid=root,umask=0077 \ testfs-limited.img "$testfs_limited" WVPASS cp -pPR src "$testfs"/src (WVPASS cd "$testfs"; WVPASS test-src-create-extract) || exit $? WVSTART 'meta - atime (as root)' WVPASS force-delete "$testfs"/src WVPASS mkdir "$testfs"/src ( WVPASS mkdir "$testfs"/src/foo WVPASS touch "$testfs"/src/bar PYTHONPATH="$TOP/lib" \ WVPASS bup-python -c "from bup import xstat; \ x = xstat.timespec_to_nsecs((42, 0));\ xstat.utime('$testfs/src/foo', (x, x));\ xstat.utime('$testfs/src/bar', (x, x));" WVPASS cd "$testfs" WVPASS bup meta -v --create --recurse --file src.meta src WVPASS bup meta -tvf src.meta # Test extract. WVPASS force-delete src-restore WVPASS mkdir src-restore WVPASS cd src-restore WVPASS bup meta --extract --file ../src.meta WVPASSEQ "$(bup xstat --include-fields=atime src/foo)" "atime: 42" WVPASSEQ "$(bup xstat --include-fields=atime src/bar)" "atime: 42" # Test start/finish extract. WVPASS force-delete src WVPASS bup meta --start-extract --file ../src.meta WVPASS test -d src WVPASS bup meta --finish-extract --file ../src.meta WVPASSEQ "$(bup xstat --include-fields=atime src/foo)" "atime: 42" WVPASSEQ "$(bup xstat --include-fields=atime src/bar)" "atime: 42" ) || exit $? WVSTART 'meta - Linux attr (as root)' WVPASS force-delete "$testfs"/src WVPASS mkdir "$testfs"/src ( WVPASS touch "$testfs"/src/foo WVPASS mkdir "$testfs"/src/bar WVPASS chattr +acdeijstuADST "$testfs"/src/foo WVPASS chattr +acdeijstuADST "$testfs"/src/bar (WVPASS cd "$testfs"; WVPASS test-src-create-extract) || exit $? # Test restoration to a limited filesystem (vfat). ( WVPASS bup meta --create --recurse --file "$testfs"/src.meta \ "$testfs"/src WVPASS force-delete "$testfs_limited"/src-restore WVPASS mkdir "$testfs_limited"/src-restore WVPASS cd "$testfs_limited"/src-restore WVFAIL bup meta --extract --file "$testfs"/src.meta 2>&1 \ | WVPASS grep -e '^Linux chattr:' \ | WVPASS bup-python -c \ 'import sys; exit(not len(sys.stdin.readlines()) == 3)' ) || exit $? ) || exit $? WVSTART 'meta - Linux xattr (as root)' WVPASS force-delete "$testfs"/src WVPASS mkdir "$testfs"/src WVPASS touch "$testfs"/src/foo WVPASS mkdir "$testfs"/src/bar WVPASS attr -s foo -V bar "$testfs"/src/foo WVPASS attr -s foo -V bar "$testfs"/src/bar (WVPASS cd "$testfs"; WVPASS test-src-create-extract) || exit $? # Test restoration to a limited filesystem (vfat). ( WVPASS bup meta --create --recurse --file "$testfs"/src.meta \ "$testfs"/src WVPASS force-delete "$testfs_limited"/src-restore WVPASS mkdir "$testfs_limited"/src-restore WVPASS cd "$testfs_limited"/src-restore WVFAIL bup meta --extract --file "$testfs"/src.meta WVFAIL bup meta --extract --file "$testfs"/src.meta 2>&1 \ | WVPASS grep -e "^xattr\.set '" \ | WVPASS bup-python -c \ 'import sys; exit(not len(sys.stdin.readlines()) == 2)' ) || exit $? 
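    # As in the attr and xattr cases above, vfat serves as the "limited"
    # target because it lacks support for these features, so each
    # restore onto it should emit the warnings counted in the subtests.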
WVSTART 'meta - POSIX.1e ACLs (as root)' WVPASS force-delete "$testfs"/src WVPASS mkdir "$testfs"/src WVPASS touch "$testfs"/src/foo WVPASS mkdir "$testfs"/src/bar WVPASS setfacl -m u:root:r "$testfs"/src/foo WVPASS setfacl -m u:root:r "$testfs"/src/bar (WVPASS cd "$testfs"; WVPASS test-src-create-extract) || exit $? # Test restoration to a limited filesystem (vfat). ( WVPASS bup meta --create --recurse --file "$testfs"/src.meta \ "$testfs"/src WVPASS force-delete "$testfs_limited"/src-restore WVPASS mkdir "$testfs_limited"/src-restore WVPASS cd "$testfs_limited"/src-restore WVFAIL bup meta --extract --file "$testfs"/src.meta 2>&1 \ | WVPASS grep -e '^POSIX1e ACL applyto:' \ | WVPASS bup-python -c \ 'import sys; exit(not len(sys.stdin.readlines()) == 2)' ) || exit $? WVPASS umount "$testfs" WVPASS umount "$testfs_limited" WVPASS rm -r "$testfs" "$testfs_limited" WVPASS rm -r "$tmpdir" ) || exit $? fi WVPASS rm -r "$tmpdir" bup-0.29/t/test-on.sh000077500000000000000000000023761303127641400144360ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . ./t/lib.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } compare-trees() { "$top/t/compare-trees" "$@"; } WVPASS bup init WVPASS cd "$tmpdir" WVSTART "index/save" WVPASS mkdir src src/foo WVPASS date > src/bar WVPASS bup random 1k > src/baz WVPASS bup on - index src WVPASS bup on - save -ctn src src > get.log WVPASSEQ $(WVPASS cat get.log | WVPASS wc -l) 2 tree_id=$(WVPASS awk 'FNR == 1' get.log) || exit $? commit_id=$(WVPASS awk 'FNR == 2' get.log) || exit $? WVPASS git ls-tree "$tree_id" WVPASS git cat-file commit "$commit_id" | head -n 1 \ | WVPASS grep "^tree $tree_id\$" WVPASS bup restore -C restore "src/latest/$(pwd)/src/." WVPASS compare-trees src/ restore/ WVPASS rm -r restore WVSTART "split" WVPASS bup on - split -ctn baz src/baz > get.log tree_id=$(WVPASS awk 'FNR == 1' get.log) || exit $? commit_id=$(WVPASS awk 'FNR == 2' get.log) || exit $? WVPASS git ls-tree "$tree_id" WVPASS git cat-file commit "$commit_id" | head -n 1 \ | WVPASS grep "^tree $tree_id\$" WVPASS bup join baz > restore-baz WVPASS cmp src/baz restore-baz WVPASS rm -rf "$tmpdir" bup-0.29/t/test-prune-older000077500000000000000000000220541303127641400156400ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? 
exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble from __future__ import print_function from collections import defaultdict from difflib import unified_diff from itertools import chain, dropwhile, groupby, takewhile from os import environ, chdir from os.path import abspath, dirname from pipes import quote from random import choice, randint from shutil import copytree, rmtree from subprocess import PIPE, Popen, check_call from sys import stderr from time import localtime, strftime, time import os, random, sys script_home = abspath(dirname(sys.argv[0] or '.')) sys.path[:0] = [abspath(script_home + '/../lib'), abspath(script_home + '/..')] top = os.getcwd() bup_cmd = top + '/bup' from buptest import test_tempdir from wvtest import wvfail, wvpass, wvpasseq, wvpassne, wvstart from bup.helpers import partition, period_as_secs, readpipe def logcmd(cmd): if isinstance(cmd, basestring): print(cmd, file=stderr) else: print(' '.join(map(quote, cmd)), file=stderr) def exc(cmd, shell=False): logcmd(cmd) check_call(cmd, shell=shell) def exo(cmd, stdin=None, stdout=True, stderr=False, shell=False, check=True): logcmd(cmd) p = Popen(cmd, stdin=None, stdout=(PIPE if stdout else None), stderr=PIPE, shell=shell) out, err = p.communicate() if check and p.returncode != 0: raise Exception('subprocess %r failed with status %d, stderr: %r' % (' '.join(argv), p.returncode, err)) return out, err, p def bup(*args): return exo((bup_cmd,) + args)[0] def bupc(*args): return exc((bup_cmd,) + args) def create_older_random_saves(n, start_utc, end_utc): with open('foo', 'w') as f: pass exc(['git', 'add', 'foo']) utcs = set() while len(utcs) != n: utcs.add(randint(start_utc, end_utc)) utcs = sorted(utcs) for utc in utcs: with open('foo', 'w') as f: f.write(str(utc) + '\n') exc(['git', 'commit', '--date', str(utc), '-qam', str(utc)]) exc(['git', 'gc', '--aggressive']) return utcs # There is corresponding code in bup for some of this, but the # computation method is different here, in part so that the test can # provide a more effective cross-check. 
period_kinds = ['all', 'dailies', 'monthlies', 'yearlies'] period_scale = {'s': 1, 'min': 60, 'h': 60 * 60, 'd': 60 * 60 * 24, 'w': 60 * 60 * 24 * 7, 'm': 60 * 60 * 24 * 31, 'y': 60 * 60 * 24 * 366} period_scale_kinds = period_scale.keys() def expected_retentions(utcs, utc_start, spec): if not spec: return utcs utcs = sorted(utcs, reverse=True) period_start = dict(spec) for kind, duration in period_start.iteritems(): period_start[kind] = utc_start - period_as_secs(duration) period_start = defaultdict(lambda: float('inf'), period_start) all = list(takewhile(lambda x: x >= period_start['all'], utcs)) utcs = list(dropwhile(lambda x: x >= period_start['all'], utcs)) matches = takewhile(lambda x: x >= period_start['dailies'], utcs) dailies = [min(day_utcs) for yday, day_utcs in groupby(matches, lambda x: localtime(x).tm_yday)] utcs = list(dropwhile(lambda x: x >= period_start['dailies'], utcs)) matches = takewhile(lambda x: x >= period_start['monthlies'], utcs) monthlies = [min(month_utcs) for month, month_utcs in groupby(matches, lambda x: localtime(x).tm_mon)] utcs = dropwhile(lambda x: x >= period_start['monthlies'], utcs) matches = takewhile(lambda x: x >= period_start['yearlies'], utcs) yearlies = [min(year_utcs) for year, year_utcs in groupby(matches, lambda x: localtime(x).tm_year)] return chain(all, dailies, monthlies, yearlies) def period_spec(start_utc, end_utc): global period_kinds, period_scale, period_scale_kinds result = [] desired_specs = randint(1, 2 * len(period_kinds)) assert(desired_specs >= 1) # At least one --keep argument is required while len(result) < desired_specs: period = None if randint(1, 100) <= 5: period = 'forever' else: assert(end_utc > start_utc) period_secs = randint(1, end_utc - start_utc) scale = choice(period_scale_kinds) mag = int(float(period_secs) / period_scale[scale]) if mag != 0: period = str(mag) + scale if period: result += [(choice(period_kinds), period)] return tuple(result) def unique_period_specs(n, start_utc, end_utc): invocations = set() while len(invocations) < n: invocations.add(period_spec(start_utc, end_utc)) return tuple(invocations) def period_spec_to_period_args(spec): return tuple(chain(*(('--keep-' + kind + '-for', period) for kind, period in spec))) def result_diffline(x): return str(x) + strftime(' %Y-%m-%d-%H%M%S', localtime(x)) + '\n' def check_prune_result(expected): actual = sorted([int(x) for x in exo(['git', 'log', '--pretty=format:%at'])[0].splitlines()]) if expected != actual: for x in expected: print('ex:', x, strftime('%Y-%m-%d-%H%M%S', localtime(x)), file=stderr) for line in unified_diff([result_diffline(x) for x in expected], [result_diffline(x) for x in actual], fromfile='expected', tofile='actual'): sys.stderr.write(line) wvpass(expected == actual) environ['GIT_AUTHOR_NAME'] = 'bup test' environ['GIT_COMMITTER_NAME'] = 'bup test' environ['GIT_AUTHOR_EMAIL'] = 'bup@a425bc70a02811e49bdf73ee56450e6f' environ['GIT_COMMITTER_EMAIL'] = 'bup@a425bc70a02811e49bdf73ee56450e6f' seed = int(environ.get('BUP_TEST_SEED', time())) random.seed(seed) print('random seed:', seed, file=stderr) save_population = int(environ.get('BUP_TEST_PRUNE_OLDER_SAVES', 2000)) prune_cycles = int(environ.get('BUP_TEST_PRUNE_OLDER_CYCLES', 20)) prune_gc_cycles = int(environ.get('BUP_TEST_PRUNE_OLDER_GC_CYCLES', 10)) with test_tempdir('prune-older-') as tmpdir: environ['BUP_DIR'] = tmpdir + '/work/.git' environ['GIT_DIR'] = tmpdir + '/work/.git' now = int(time()) three_years_ago = now - (60 * 60 * 24 * 366 * 3) chdir(tmpdir) exc(['git', 'init', 'work']) 
wvstart('generating ' + str(save_population) + ' random saves') chdir(tmpdir + '/work') save_utcs = create_older_random_saves(save_population, three_years_ago, now) chdir(tmpdir) test_set_hash = exo(['git', 'show-ref', '-s', 'master'])[0].rstrip() ls_saves = bup('ls', 'master').splitlines() wvpasseq(save_population + 1, len(ls_saves)) wvstart('ensure everything kept, if no keep arguments') exc(['git', 'reset', '--hard', test_set_hash]) _, errmsg, proc = exo((bup_cmd, 'prune-older', '-v', '--unsafe', '--no-gc', '--wrt', str(now)) \ + ('master',), stdout=False, stderr=True, check=False) wvpassne(proc.returncode, 0) wvpass('at least one keep argument is required' in errmsg) check_prune_result(save_utcs) wvstart('running %d generative no-gc tests on %d saves' % (prune_cycles, save_population)) for spec in unique_period_specs(prune_cycles, # Make it more likely we'll have # some outside the save range. three_years_ago - period_scale['m'], now): exc(['git', 'reset', '--hard', test_set_hash]) expected = sorted(expected_retentions(save_utcs, now, spec)) exc((bup_cmd, 'prune-older', '-v', '--unsafe', '--no-gc', '--wrt', str(now)) \ + period_spec_to_period_args(spec) \ + ('master',)) check_prune_result(expected) # More expensive because we have to recreate the repo each time wvstart('running %d generative gc tests on %d saves' % (prune_gc_cycles, save_population)) exc(['git', 'reset', '--hard', test_set_hash]) copytree('work/.git', 'clean-test-repo', symlinks=True) for spec in unique_period_specs(prune_gc_cycles, # Make it more likely we'll have # some outside the save range. three_years_ago - period_scale['m'], now): rmtree('work/.git') copytree('clean-test-repo', 'work/.git') expected = sorted(expected_retentions(save_utcs, now, spec)) exc((bup_cmd, 'prune-older', '-v', '--unsafe', '--wrt', str(now)) \ + period_spec_to_period_args(spec) \ + ('master',)) check_prune_result(expected) bup-0.29/t/test-redundant-saves.sh000077500000000000000000000031771303127641400171250ustar00rootroot00000000000000#!/usr/bin/env bash # Test that running save more than once with no other changes produces # the exact same tree. # Note: we can't compare the top-level hash (i.e. the output of "save # -t" because that currently pulls the metadata for unindexed parent # directories directly from the filesystem, and the relevant atimes # may change between runs. So instead we extract the roots of the # indexed trees for comparison via t/subtree-hash. . ./wvtest-bup.sh || exit $? set -o pipefail WVSTART 'all' top="$(pwd)" tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$BUP_DIR" bup() { "$top/bup" "$@"; } WVPASS mkdir -p "$tmpdir/src" WVPASS mkdir -p "$tmpdir/src/d" WVPASS mkdir -p "$tmpdir/src/d/e" WVPASS touch "$tmpdir/src/"{f,b,a,d} WVPASS touch "$tmpdir/src/d/z" WVPASS bup init WVPASS bup index -u "$tmpdir/src" declare -a indexed_top IFS=/ indexed_top="${tmpdir##/}" indexed_top=(${indexed_top%%/}) unset IFS tree1=$(WVPASS bup save -t "$tmpdir/src") || exit $? indexed_tree1="$(WVPASS t/subtree-hash "$tree1" "${indexed_top[@]}" src)" \ || exit $? result="$(WVPASS cd "$tmpdir/src"; WVPASS bup index -m)" || exit $? WVPASSEQ "$result" "" tree2=$(WVPASS bup save -t "$tmpdir/src") || exit $? indexed_tree2="$(WVPASS t/subtree-hash "$tree2" "${indexed_top[@]}" src)" \ || exit $? WVPASSEQ "$indexed_tree1" "$indexed_tree2" result="$(WVPASS bup index -s / | WVFAIL grep ^D)" || exit $? WVPASSEQ "$result" "" tree3=$(WVPASS bup save -t /) || exit $? 
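# The save above is rooted at /, but only indexed paths end up in it,
# so the subtree hash for src extracted next should still match the
# one from the first save.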
indexed_tree3="$(WVPASS t/subtree-hash "$tree3" "${indexed_top[@]}" src)" || exit $? WVPASSEQ "$indexed_tree1" "$indexed_tree3" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-release-archive.sh000077500000000000000000000017021303127641400170510ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . t/lib.sh || exit $? . config/config.vars.sh set -o pipefail WVPASS git status > /dev/null if ! git diff-index --quiet HEAD; then WVDIE "uncommitted changes; cannot continue" fi top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? bup() { "$top/bup" "$@"; } WVPASS cd "$tmpdir" WVPASS git clone "$top" clone for ver in 11.11 11.11.11; do WVSTART "version $ver" WVPASS cd clone WVPASS git tag "$ver" WVPASS git archive --prefix=bup-"$ver"/ -o "$tmpdir"/bup-"$ver".tgz "$ver" WVPASS cd "$tmpdir" WVPASS tar xzf bup-"$ver".tgz WVPASS cd bup-"$ver" WVPASS "$bup_make" WVPASSEQ "$ver" "$(./bup version)" WVPASS cd "$tmpdir" done WVSTART 'make check in unpacked archive' WVPASS cd bup-11.11.11 if ! "$bup_make" -j5 check > archive-tests.log 2>&1; then cat archive-tests.log 1>&2 WVPASS false fi WVPASS cd "$top" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-restore-map-owner.sh000077500000000000000000000065201303127641400174030ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . t/lib.sh || exit $? root_status="$(t/root-status)" || exit $? if [ "$root_status" != root ]; then echo 'Not root: skipping restore --map-* tests.' exit 0 # FIXME: add WVSKIP. fi top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } uid=$(WVPASS id -u) || exit $? user=$(WVPASS id -un) || exit $? gid=$(WVPASS id -g) || exit $? group=$(WVPASS id -gn) || exit $? other_uinfo=$(WVPASS t/id-other-than --user "$user") || exit $? other_user="${other_uinfo%%:*}" other_uid="${other_uinfo##*:}" other_ginfo=$(WVPASS t/id-other-than --group "$group" 0) || exit $? other_group="${other_ginfo%%:*}" other_gid="${other_ginfo##*:}" WVPASS bup init WVPASS cd "$tmpdir" WVSTART "restore --map-user/group/uid/gid (control)" WVPASS mkdir src WVPASS touch src/foo # Some systems assign the parent dir group to new paths. WVPASS chgrp -R "$group" src WVPASS bup index src WVPASS bup save -n src src WVPASS bup restore -C dest "src/latest/$(pwd)/src/" WVPASS bup xstat dest/foo > foo-xstat WVPASS grep -qE "^user: $user\$" foo-xstat WVPASS grep -qE "^uid: $uid\$" foo-xstat WVPASS grep -qE "^group: $group\$" foo-xstat WVPASS grep -qE "^gid: $gid\$" foo-xstat WVSTART "restore --map-user/group/uid/gid (user/group)" WVPASS rm -rf dest # Have to remap uid/gid too because we're root and 0 would win). WVPASS bup restore -C dest \ --map-uid "$uid=$other_uid" --map-gid "$gid=$other_gid" \ --map-user "$user=$other_user" --map-group "$group=$other_group" \ "src/latest/$(pwd)/src/" WVPASS bup xstat dest/foo > foo-xstat WVPASS grep -qE "^user: $other_user\$" foo-xstat WVPASS grep -qE "^uid: $other_uid\$" foo-xstat WVPASS grep -qE "^group: $other_group\$" foo-xstat WVPASS grep -qE "^gid: $other_gid\$" foo-xstat WVSTART "restore --map-user/group/uid/gid (user/group trumps uid/gid)" WVPASS rm -rf dest WVPASS bup restore -C dest \ --map-uid "$uid=$other_uid" --map-gid "$gid=$other_gid" \ "src/latest/$(pwd)/src/" # Should be no changes. 
WVPASS bup xstat dest/foo > foo-xstat WVPASS grep -qE "^user: $user\$" foo-xstat WVPASS grep -qE "^uid: $uid\$" foo-xstat WVPASS grep -qE "^group: $group\$" foo-xstat WVPASS grep -qE "^gid: $gid\$" foo-xstat WVSTART "restore --map-user/group/uid/gid (uid/gid)" WVPASS rm -rf dest WVPASS bup restore -C dest \ --map-user "$user=" --map-group "$group=" \ --map-uid "$uid=$other_uid" --map-gid "$gid=$other_gid" \ "src/latest/$(pwd)/src/" WVPASS bup xstat dest/foo > foo-xstat WVPASS grep -qE "^user: $other_user\$" foo-xstat WVPASS grep -qE "^uid: $other_uid\$" foo-xstat WVPASS grep -qE "^group: $other_group\$" foo-xstat WVPASS grep -qE "^gid: $other_gid\$" foo-xstat has_uid_gid_0=$(WVPASS bup-python -c " import grp, pwd try: pwd.getpwuid(0) grp.getgrgid(0) print 'yes' except KeyError, ex: pass " 2>/dev/null) || exit $? if [ "$has_uid_gid_0" == yes ] then WVSTART "restore --map-user/group/uid/gid (zero uid/gid trumps all)" WVPASS rm -rf dest WVPASS bup restore -C dest \ --map-user "$user=$other_user" --map-group "$group=$other_group" \ --map-uid "$uid=0" --map-gid "$gid=0" \ "src/latest/$(pwd)/src/" WVPASS bup xstat dest/foo > foo-xstat WVPASS grep -qE "^uid: 0\$" foo-xstat WVPASS grep -qE "^gid: 0\$" foo-xstat WVPASS rm -rf "$tmpdir" fi bup-0.29/t/test-restore-single-file.sh000077500000000000000000000013041303127641400176670ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? set -o pipefail WVSTART 'all' top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVPASS mkdir "$tmpdir/foo" WVPASS mkdir "$tmpdir/foo/bar" # Make sure a dir sorts before baz (regression test). WVPASS touch "$tmpdir/foo/baz" WVPASS WVPASS bup init WVPASS WVPASS bup index "$tmpdir/foo" WVPASS bup save -n foo "$tmpdir/foo" # Make sure the timestamps will differ if metadata isn't being restored. WVPASS bup tick WVPASS bup restore -C "$tmpdir/restore" "foo/latest/$tmpdir/foo/baz" WVPASS "$top/t/compare-trees" "$tmpdir/foo/baz" "$tmpdir/restore/baz" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-rm-between-index-and-save.sh000077500000000000000000000040601303127641400206600ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" D="$tmpdir/data" bup() { "$top/bup" "$@"; } WVSTART "remove file" # Fixed in commit 8585613c1f45f3e20feec00b24fc7e3a948fa23e ("Store # metadata in the index....") WVPASS mkdir "$D" WVPASS bup init WVPASS echo "content" > "$D"/foo WVPASS echo "content" > "$D"/bar WVPASS bup tick WVPASS bup index -ux "$D" WVPASS bup save -n save-fail-missing "$D" WVPASS echo "content" > "$D"/baz WVPASS bup tick WVPASS bup index -ux "$D" WVPASS rm "$D"/foo # When "bup tick" is removed above, this may fail (complete with warning), # since the ctime/mtime of "foo" might be pushed back: WVPASS bup save -n save-fail-missing "$D" # when the save-call failed, foo is missing from output, since only # then bup notices, that it was removed: WVPASSEQ "$(bup ls -A save-fail-missing/latest/$TOP/$D/)" "bar baz foo" # index/save again WVPASS bup tick WVPASS bup index -ux "$D" WVPASS bup save -n save-fail-missing "$D" # now foo is gone: WVPASSEQ "$(bup ls -A save-fail-missing/latest/$TOP/$D/)" "bar baz" # TODO: Test for racecondition between reading a file and reading its metadata? 
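# Directories behave differently from plain files: if one disappears
# between index and save, the save fails (with a delayed error) but the
# directory is still recorded from the index, as checked below.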
WVSTART "remove dir" WVPASS rm -r "$D" WVPASS mkdir "$D" WVPASS rm -r "$BUP_DIR" WVPASS bup init WVPASS mkdir "$D"/foo WVPASS mkdir "$D"/bar WVPASS bup tick WVPASS bup index -ux "$D" WVPASS bup save -n save-fail-missing "$D" WVPASS touch "$D"/bar WVPASS mkdir "$D"/baz WVPASS bup tick WVPASS bup index -ux "$D" WVPASS rmdir "$D"/foo # with directories, bup notices that foo is missing, so it fails # (complete with delayed error) WVFAIL bup save -n save-fail-missing "$D" # ...but foo is still saved since it was just fine in the index WVPASSEQ "$(bup ls -AF save-fail-missing/latest/$TOP/$D/)" "bar/ baz/ foo/" # Index again: WVPASS bup tick WVPASS bup index -ux "$D" # no non-zero-exitcode anymore: WVPASS bup save -n save-fail-missing "$D" # foo is now gone WVPASSEQ "$(bup ls -AF save-fail-missing/latest/$TOP/$D/)" "bar/ baz/" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-rm.sh000077500000000000000000000170301303127641400144310ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . ./t/lib.sh || exit $? top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } compare-trees() { "$top/t/compare-trees" "$@"; } wv_matches_rx() { local caller_file=${BASH_SOURCE[0]} local caller_line=${BASH_LINENO[0]} local src="$caller_file:$caller_line" if test $# -ne 2; then echo "! $src wv_matches_rx requires 2 arguments FAILED" 1>&2 return fi local str="$1" local rx="$2" echo "Matching:" 1>&2 || exit $? echo "$str" | sed 's/^\(.*\)/ \1/' 1>&2 || exit $? echo "Against:" 1>&2 || exit $? echo "$rx" | sed 's/^\(.*\)/ \1/' 1>&2 || exit $? if [[ "$str" =~ $rx ]]; then echo "! $src regex matches ok" 1>&2 || exit $? else echo "! $src regex doesn't match FAILED" 1>&2 || exit $? 
fi } WVPASS bup init WVPASS cd "$tmpdir" WVSTART "rm /foo (lone branch)" WVPASS mkdir src src/foo WVPASS echo twisty-maze > src/1 WVPASS bup index src WVPASS bup save -n src src WVPASS "$top"/t/sync-tree bup/ bup-baseline/ # FIXME: test -n WVPASS bup tick # Make sure we always get the timestamp changes below WVPASS bup rm --unsafe /src wv_matches_rx "$(compare-trees bup/ bup-baseline/)" \ '\*deleting[ ]+logs/refs/heads/src \*deleting[ ]+refs/heads/src \.d\.\.t\.\.\.[.]*[ ]+logs/refs/heads/ \.d\.\.t\.\.\.[.]*[ ]+refs/heads/' WVSTART "rm /foo (one of many)" WVPASS rm -rf bup WVPASS mv bup-baseline bup WVPASS echo twisty-maze > src/2 WVPASS bup index src WVPASS bup save -n src-2 src WVPASS echo twisty-maze > src/3 WVPASS bup index src WVPASS bup save -n src-3 src WVPASS "$top"/t/sync-tree bup/ bup-baseline/ WVPASS bup tick # Make sure we always get the timestamp changes below WVPASS bup rm --unsafe /src wv_matches_rx "$(compare-trees bup/ bup-baseline/)" \ "\*deleting[ ]+logs/refs/heads/src \*deleting[ ]+refs/heads/src \.d\.\.t\.\.\.[.]*[ ]+logs/refs/heads/ \.d\.\.t\.\.\.[.]*[ ]+refs/heads/" WVSTART "rm /foo /bar (multiple of many)" WVPASS rm -rf bup WVPASS mv bup-baseline bup WVPASS echo twisty-maze > src/4 WVPASS bup index src WVPASS bup save -n src-4 src WVPASS echo twisty-maze > src/5 WVPASS bup index src WVPASS bup save -n src-5 src WVPASS "$top"/t/sync-tree bup/ bup-baseline/ WVPASS bup tick # Make sure we always get the timestamp changes below WVPASS bup rm --unsafe /src-2 /src-4 wv_matches_rx "$(compare-trees bup/ bup-baseline/)" \ "\*deleting[ ]+logs/refs/heads/src-4 \*deleting[ ]+logs/refs/heads/src-2 \*deleting[ ]+refs/heads/src-4 \*deleting[ ]+refs/heads/src-2 \.d\.\.t\.\.\.[.]*[ ]+logs/refs/heads/ \.d\.\.t\.\.\.[.]*[ ]+refs/heads/" WVSTART "rm /foo /bar (all)" WVPASS rm -rf bup WVPASS mv bup-baseline bup WVPASS "$top"/t/sync-tree bup/ bup-baseline/ WVPASS bup tick # Make sure we always get the timestamp changes below WVPASS bup rm --unsafe /src /src-2 /src-3 /src-4 /src-5 wv_matches_rx "$(compare-trees bup/ bup-baseline/)" \ "\*deleting[ ]+logs/refs/heads/src-5 \*deleting[ ]+logs/refs/heads/src-4 \*deleting[ ]+logs/refs/heads/src-3 \*deleting[ ]+logs/refs/heads/src-2 \*deleting[ ]+logs/refs/heads/src \*deleting[ ]+refs/heads/src-5 \*deleting[ ]+refs/heads/src-4 \*deleting[ ]+refs/heads/src-3 \*deleting[ ]+refs/heads/src-2 \*deleting[ ]+refs/heads/src \.d\.\.t\.\.\.[.]*[ ]+logs/refs/heads/ \.d\.\.t\.\.\.[.]*[ ]+refs/heads/" WVSTART "rm /foo/bar (lone save - equivalent to rm /foo)" WVPASS rm -rf bup bup-baseline src WVPASS bup init WVPASS mkdir src WVPASS echo twisty-maze > src/1 WVPASS bup index src WVPASS bup save -n src src WVPASS bup ls src > tmp-ls save1="$(WVPASS head -n 1 tmp-ls)" || exit $? WVPASS "$top"/t/sync-tree bup/ bup-baseline/ WVPASS bup tick # Make sure we always get the timestamp changes below WVFAIL bup rm --unsafe /src/latest WVPASS bup rm --unsafe /src/"$save1" wv_matches_rx "$(compare-trees bup/ bup-baseline/)" \ "\*deleting[ ]+logs/refs/heads/src \*deleting[ ]+refs/heads/src \.d\.\.t\.\.\.[.]*[ ]+logs/refs/heads/ \.d\.\.t\.\.\.[.]*[ ]+refs/heads/" verify-changes-caused-by-rewriting-save() { local before="$1" after="$2" tmpdir tmpdir="$(WVPASS wvmktempdir)" || exit $? (WVPASS cd "$before" && WVPASS find . | WVPASS sort) \ > "$tmpdir/before" || exit $? (WVPASS cd "$after" && WVPASS find . | WVPASS sort) \ > "$tmpdir/after" || exit $? local new_paths new_idx new_pack observed new_paths="$(WVPASS comm -13 "$tmpdir/before" "$tmpdir/after")" || exit $? 
new_idx="$(echo "$new_paths" | WVPASS grep -E '^\./objects/pack/pack-.*\.idx$' | cut -b 3-)" || exit $? new_pack="$(echo "$new_paths" | WVPASS grep -E '^\./objects/pack/pack-.*\.pack$' | cut -b 3-)" || exit $? wv_matches_rx "$(compare-trees "$after/" "$before/")" \ ">fcst\.\.\.[.]*[ ]+logs/refs/heads/src \.d\.\.t\.\.\.[.]*[ ]+objects/ \.d\.\.t\.\.\.[.]*[ ]+objects/pack/ >fcst\.\.\.[.]*[ ]+objects/pack/bup\.bloom >f\+\+\+\+\+\+\+[+]*[ ]+$new_idx >f\+\+\+\+\+\+\+[+]*[ ]+$new_pack \.d\.\.t\.\.\.[.]*[ ]+refs/heads/ >fc\.t\.\.\.[.]*[ ]+refs/heads/src" WVPASS rm -rf "$tmpdir" } commit-hash-n() { local n="$1" repo="$2" branch="$3" GIT_DIR="$repo" WVPASS git rev-list --reverse "$branch" \ | WVPASS awk "FNR == $n" } rm-safe-cinfo() { local n="$1" repo="$2" branch="$3" hash hash="$(commit-hash-n "$n" "$repo" "$branch")" || exit $? local fmt='Tree: %T%n' fmt="${fmt}Author: %an <%ae> %ai%n" fmt="${fmt}Committer: %cn <%ce> %ci%n" fmt="${fmt}%n%s%n%b" GIT_DIR="$repo" WVPASS git log -n1 --pretty=format:"$fmt" "$hash" } WVSTART 'rm /foo/BAR (setup)' WVPASS rm -rf bup bup-baseline src WVPASS bup init WVPASS mkdir src WVPASS echo twisty-maze > src/1 WVPASS bup index src WVPASS bup save -n src src WVPASS echo twisty-maze > src/2 WVPASS bup index src WVPASS bup tick WVPASS bup save -n src src WVPASS echo twisty-maze > src/3 WVPASS bup index src WVPASS bup tick WVPASS bup save -n src src WVPASS mv bup bup-baseline WVPASS bup tick # Make sure we always get the timestamp changes below WVSTART "rm /foo/BAR (first of many)" WVPASS "$top"/t/sync-tree bup-baseline/ bup/ WVPASS bup ls src > tmp-ls victim="$(WVPASS head -n 1 tmp-ls)" || exit $? WVPASS bup rm --unsafe /src/"$victim" verify-changes-caused-by-rewriting-save bup-baseline bup observed=$(WVPASS git rev-list src | WVPASS wc -l) || exit $? WVPASSEQ 2 $observed WVPASSEQ "$(rm-safe-cinfo 1 bup src)" "$(rm-safe-cinfo 2 bup-baseline src)" WVPASSEQ "$(rm-safe-cinfo 2 bup src)" "$(rm-safe-cinfo 3 bup-baseline src)" WVSTART "rm /foo/BAR (one of many)" WVPASS "$top"/t/sync-tree bup-baseline/ bup/ victim="$(WVPASS bup ls src | tail -n +2 | head -n 1)" || exit $? WVPASS bup rm --unsafe /src/"$victim" verify-changes-caused-by-rewriting-save bup-baseline bup observed=$(git rev-list src | wc -l) || exit $? WVPASSEQ 2 $observed WVPASSEQ "$(commit-hash-n 1 bup src)" "$(commit-hash-n 1 bup-baseline src)" WVPASSEQ "$(rm-safe-cinfo 2 bup src)" "$(rm-safe-cinfo 3 bup-baseline src)" WVSTART "rm /foo/BAR (last of many)" WVPASS "$top"/t/sync-tree bup-baseline/ bup/ victim="$(WVPASS bup ls src | tail -n 2 | head -n 1)" || exit $? WVPASS bup rm --unsafe -vv /src/"$victim" wv_matches_rx "$(compare-trees bup/ bup-baseline/)" \ ">fcst\.\.\.[.]*[ ]+logs/refs/heads/src \.d\.\.t\.\.\.[.]*[ ]+refs/heads/ >fc\.t\.\.\.[.]*[ ]+refs/heads/src" observed=$(git rev-list src | wc -l) || exit $? WVPASSEQ 2 $observed WVPASSEQ "$(commit-hash-n 1 bup src)" "$(commit-hash-n 1 bup-baseline src)" WVPASSEQ "$(commit-hash-n 2 bup src)" "$(commit-hash-n 2 bup-baseline src)" # FIXME: test that committer changes when rewriting, when appropriate WVPASS rm -rf "$tmpdir" bup-0.29/t/test-save-creates-no-unrefs.sh000077500000000000000000000007751303127641400203170ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? WVSTART 'all' top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? 
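# BUP_DIR doubles as GIT_DIR below, so a plain "git fsck --unreachable"
# can be run against the repository to confirm that save left no
# orphaned objects behind.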
export BUP_DIR="$tmpdir/bup" export GIT_DIR="$BUP_DIR" bup() { "$top/bup" "$@"; } WVPASS mkdir -p "$tmpdir/src" WVPASS touch "$tmpdir/src/foo" WVPASS bup init WVPASS bup index "$tmpdir/src" WVPASS bup save -n src "$tmpdir/src" WVPASSEQ "$(git fsck --unreachable)" "" WVPASS bup save -n src "$tmpdir/src" WVPASSEQ "$(git fsck --unreachable)" "" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-save-restore-excludes.sh000077500000000000000000000164431303127641400202530ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVPASS cd "$tmpdir" WVSTART "index excludes bupdir" WVPASS force-delete src "$BUP_DIR" WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS bup random 128k >src/b WVPASS mkdir src/d src/d/e WVPASS bup random 512 >src/f WVPASS bup index -ux src WVPASS bup save -n exclude-bupdir src WVPASSEQ "$(bup ls -AF "exclude-bupdir/latest/$tmpdir/src/")" "a b d/ f" WVSTART "index --exclude" WVPASS force-delete src "$BUP_DIR" WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS bup random 128k >src/b WVPASS mkdir src/d src/d/e WVPASS bup random 512 >src/f WVPASS bup random 512 >src/j WVPASS bup index -ux --exclude src/d --exclude src/j src WVPASS bup save -n exclude src WVPASSEQ "$(bup ls "exclude/latest/$tmpdir/src/")" "a b f" WVPASS mkdir src/g src/h WVPASS bup index -ux --exclude src/d --exclude $tmpdir/src/g --exclude src/h \ --exclude "$tmpdir/src/j" src WVPASS bup save -n exclude src WVPASSEQ "$(bup ls "exclude/latest/$tmpdir/src/")" "a b f" WVSTART "index --exclude-from" WVPASS force-delete src "$BUP_DIR" WVPASS bup init WVPASS mkdir src WVPASS echo "src/d $tmpdir/src/g src/h src/i" > exclude-list WVPASS touch src/a WVPASS bup random 128k >src/b WVPASS mkdir src/d src/d/e WVPASS bup random 512 >src/f WVPASS mkdir src/g src/h WVPASS bup random 128k > src/i WVPASS bup index -ux --exclude-from exclude-list src WVPASS bup save -n exclude-from src WVPASSEQ "$(bup ls "exclude-from/latest/$tmpdir/src/")" "a b f" WVPASS rm exclude-list # bup index --exclude-rx ... # ========================== WVSTART "index --exclude-rx '^/foo' (root anchor)" WVPASS rm -rf src "$BUP_DIR" buprestore.tmp WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS touch src/b WVPASS mkdir src/sub1 WVPASS mkdir src/sub2 WVPASS touch src/sub1/a WVPASS touch src/sub2/b WVPASS bup index -u src --exclude-rx "^$(pwd)/src/sub1/" WVPASS bup save --strip -n bupdir src WVPASS bup restore -C buprestore.tmp /bupdir/latest/ actual="$(WVPASS cd buprestore.tmp; WVPASS find . | WVPASS sort)" || exit $? WVPASSEQ "$actual" ". ./a ./b ./sub2 ./sub2/b" WVSTART "index --exclude-rx '/foo$' (non-dir, tail anchor)" WVPASS rm -rf src "$BUP_DIR" buprestore.tmp WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS touch src/b WVPASS touch src/foo WVPASS mkdir src/sub WVPASS mkdir src/sub/foo WVPASS touch src/sub/foo/a WVPASS bup index -u src --exclude-rx '/foo$' WVPASS bup save --strip -n bupdir src WVPASS bup restore -C buprestore.tmp /bupdir/latest/ actual="$(WVPASS cd buprestore.tmp; WVPASS find . | WVPASS sort)" || exit $? WVPASSEQ "$actual" ". 
./a ./b ./sub ./sub/foo ./sub/foo/a" WVSTART "index --exclude-rx '/foo/$' (dir, tail anchor)" WVPASS rm -rf src "$BUP_DIR" buprestore.tmp WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS touch src/b WVPASS touch src/foo WVPASS mkdir src/sub WVPASS mkdir src/sub/foo WVPASS touch src/sub/foo/a WVPASS bup index -u src --exclude-rx '/foo/$' WVPASS bup save --strip -n bupdir src WVPASS bup restore -C buprestore.tmp /bupdir/latest/ actual="$(WVPASS cd buprestore.tmp; WVPASS find . | WVPASS sort)" || exit $? WVPASSEQ "$actual" ". ./a ./b ./foo ./sub" WVSTART "index --exclude-rx '/foo/.' (dir content)" WVPASS rm -rf src "$BUP_DIR" buprestore.tmp WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS touch src/b WVPASS touch src/foo WVPASS mkdir src/sub WVPASS mkdir src/sub/foo WVPASS touch src/sub/foo/a WVPASS bup index -u src --exclude-rx '/foo/.' WVPASS bup save --strip -n bupdir src WVPASS bup restore -C buprestore.tmp /bupdir/latest/ actual="$(WVPASS cd buprestore.tmp; WVPASS find . | WVPASS sort)" || exit $? WVPASSEQ "$actual" ". ./a ./b ./foo ./sub ./sub/foo" # bup index --exclude-rx-from ... # =============================== WVSTART "index --exclude-rx-from" WVPASS rm -rf src "$BUP_DIR" buprestore.tmp WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS touch src/b WVPASS mkdir src/sub1 WVPASS mkdir src/sub2 WVPASS touch src/sub1/a WVPASS touch src/sub2/b # exclude-rx-file includes blank lines to check that we ignore them. WVPASS echo "^$(pwd)/src/sub1/ " > exclude-rx-file WVPASS bup index -u src --exclude-rx-from exclude-rx-file WVPASS bup save --strip -n bupdir src WVPASS bup restore -C buprestore.tmp /bupdir/latest/ actual="$(WVPASS cd buprestore.tmp; WVPASS find . | WVPASS sort)" || exit $? WVPASSEQ "$actual" ". ./a ./b ./sub2 ./sub2/b" # bup restore --exclude-rx ... # ============================ WVSTART "restore --exclude-rx '^/foo' (root anchor)" WVPASS rm -rf src "$BUP_DIR" buprestore.tmp WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS touch src/b WVPASS mkdir src/sub1 WVPASS mkdir src/sub2 WVPASS touch src/sub1/a WVPASS touch src/sub2/b WVPASS bup index -u src WVPASS bup save --strip -n bupdir src WVPASS bup restore -C buprestore.tmp --exclude-rx "^/sub1/" /bupdir/latest/ actual="$(WVPASS cd buprestore.tmp; WVPASS find . | WVPASS sort)" || exit $? WVPASSEQ "$actual" ". ./a ./b ./sub2 ./sub2/b" WVSTART "restore --exclude-rx '/foo$' (non-dir, tail anchor)" WVPASS rm -rf src "$BUP_DIR" buprestore.tmp WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS touch src/b WVPASS touch src/foo WVPASS mkdir src/sub WVPASS mkdir src/sub/foo WVPASS touch src/sub/foo/a WVPASS bup index -u src WVPASS bup save --strip -n bupdir src WVPASS bup restore -C buprestore.tmp --exclude-rx '/foo$' /bupdir/latest/ actual="$(WVPASS cd buprestore.tmp; WVPASS find . | WVPASS sort)" || exit $? WVPASSEQ "$actual" ". ./a ./b ./sub ./sub/foo ./sub/foo/a" WVSTART "restore --exclude-rx '/foo/$' (dir, tail anchor)" WVPASS rm -rf src "$BUP_DIR" buprestore.tmp WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS touch src/b WVPASS touch src/foo WVPASS mkdir src/sub WVPASS mkdir src/sub/foo WVPASS touch src/sub/foo/a WVPASS bup index -u src WVPASS bup save --strip -n bupdir src WVPASS bup restore -C buprestore.tmp --exclude-rx '/foo/$' /bupdir/latest/ actual="$(WVPASS cd buprestore.tmp; WVPASS find . | WVPASS sort)" || exit $? WVPASSEQ "$actual" ". ./a ./b ./foo ./sub" WVSTART "restore --exclude-rx '/foo/.' 
(dir content)" WVPASS rm -rf src "$BUP_DIR" buprestore.tmp WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS touch src/b WVPASS touch src/foo WVPASS mkdir src/sub WVPASS mkdir src/sub/foo WVPASS touch src/sub/foo/a WVPASS bup index -u src WVPASS bup save --strip -n bupdir src WVPASS bup restore -C buprestore.tmp --exclude-rx '/foo/.' /bupdir/latest/ actual="$(WVPASS cd buprestore.tmp; WVPASS find . | WVPASS sort)" || exit $? WVPASSEQ "$actual" ". ./a ./b ./foo ./sub ./sub/foo" # bup restore --exclude-rx-from ... # ================================= WVSTART "restore --exclude-rx-from" WVPASS rm -rf src "$BUP_DIR" buprestore.tmp WVPASS bup init WVPASS mkdir src WVPASS touch src/a WVPASS touch src/b WVPASS mkdir src/sub1 WVPASS mkdir src/sub2 WVPASS touch src/sub1/a WVPASS touch src/sub2/b WVPASS bup index -u src WVPASS bup save --strip -n bupdir src WVPASS echo "^/sub1/" > exclude-rx-file WVPASS bup restore -C buprestore.tmp \ --exclude-rx-from exclude-rx-file /bupdir/latest/ actual="$(WVPASS cd buprestore.tmp; WVPASS find . | WVPASS sort)" || exit $? WVPASSEQ "$actual" ". ./a ./b ./sub2 ./sub2/b" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-save-strip-graft.sh000077500000000000000000000117071303127641400172160ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } compare-trees() { "$top/t/compare-trees" "$@"; } WVPASS cd "$tmpdir" WVSTART "save --strip" WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir -p src/x/y/z WVPASS bup random 8k > src/x/y/random-1 WVPASS bup random 8k > src/x/y/z/random-2 WVPASS bup index -u src WVPASS bup save --strip -n foo src/x/y WVPASS bup restore -C restore /foo/latest WVPASS compare-trees src/x/y/ restore/latest/ WVSTART "save --strip-path (relative)" WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir -p src/x/y/z WVPASS bup random 8k > src/x/y/random-1 WVPASS bup random 8k > src/x/y/z/random-2 WVPASS bup index -u src WVPASS bup save --strip-path src -n foo src/x WVPASS bup restore -C restore /foo/latest WVPASS compare-trees src/ restore/latest/ WVSTART "save --strip-path (absolute)" WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir -p src/x/y/z WVPASS bup random 8k > src/x/y/random-1 WVPASS bup random 8k > src/x/y/z/random-2 WVPASS bup index -u src WVPASS bup save --strip-path "$tmpdir" -n foo src WVPASS bup restore -C restore /foo/latest WVPASS compare-trees src/ "restore/latest/src/" WVSTART "save --strip-path (no match)" if test $(WVPASS path-filesystems . | WVPASS sort -u | WVPASS wc -l) -ne 1 then # Skip the test because the attempt to restore parent dirs to the # current filesystem may fail -- i.e. running from # /foo/ext4/bar/btrfs will fail when bup tries to restore linux # attrs above btrfs to the restore tree *inside* btrfs. 
# FIXME: add WVSKIP echo "(running from tree with mixed filesystems; skipping test)" 1>&2 exit 0 else WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir -p src/x/y/z WVPASS bup random 8k > src/x/y/random-1 WVPASS bup random 8k > src/x/y/z/random-2 WVPASS bup index -u src WVPASS bup save --strip-path foo -n foo src/x WVPASS bup restore -C restore /foo/latest WVPASS compare-trees src/ "restore/latest/$tmpdir/src/" fi WVSTART "save --graft (empty graft points disallowed)" WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir src WVFAIL bup save --graft =/grafted -n graft-point-absolute src 2>&1 \ | WVPASS grep 'error: a graft point cannot be empty' WVFAIL bup save --graft $top/$tmp= -n graft-point-absolute src 2>&1 \ | WVPASS grep 'error: a graft point cannot be empty' WVSTART "save --graft /x/y=/a/b (relative paths)" WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir -p src/x/y/z WVPASS bup random 8k > src/x/y/random-1 WVPASS bup random 8k > src/x/y/z/random-2 WVPASS bup index -u src WVPASS bup save --graft src=x -n foo src WVPASS bup restore -C restore /foo/latest WVPASS compare-trees src/ "restore/latest/$tmpdir/x/" WVSTART "save --graft /x/y=/a/b (matching structure)" WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir -p src/x/y/z WVPASS bup random 8k > src/x/y/random-1 WVPASS bup random 8k > src/x/y/z/random-2 WVPASS bup index -u src WVPASS bup save -v --graft "$tmpdir/src/x/y=$tmpdir/src/a/b" -n foo src/x/y WVPASS bup restore -C restore /foo/latest WVPASS compare-trees src/x/y/ "restore/latest/$tmpdir/src/a/b/" WVSTART "save --graft /x/y=/a (shorter target)" WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir -p src/x/y/z WVPASS bup random 8k > src/x/y/random-1 WVPASS bup random 8k > src/x/y/z/random-2 WVPASS bup index -u src WVPASS bup save -v --graft "$tmpdir/src/x/y=/a" -n foo src/x/y WVPASS bup restore -C restore /foo/latest WVPASS compare-trees src/x/y/ "restore/latest/a/" WVSTART "save --graft /x=/a/b (longer target)" WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir -p src/x/y/z WVPASS bup random 8k > src/x/y/random-1 WVPASS bup random 8k > src/x/y/z/random-2 WVPASS bup index -u src WVPASS bup save -v --graft "$tmpdir/src=$tmpdir/src/a/b/c" -n foo src WVPASS bup restore -C restore /foo/latest WVPASS compare-trees src/ "restore/latest/$tmpdir/src/a/b/c/" WVSTART "save --graft /x=/ (root target)" WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir -p src/x/y/z WVPASS bup random 8k > src/x/y/random-1 WVPASS bup random 8k > src/x/y/z/random-2 WVPASS bup index -u src WVPASS bup save -v --graft "$tmpdir/src/x=/" -n foo src/x WVPASS bup restore -C restore /foo/latest WVPASS compare-trees src/x/ "restore/latest/" #WVSTART "save --graft /=/x/ (root source)" # FIXME: Not tested for now -- will require cleverness, or caution as root. WVSTART "save collision" WVPASS force-delete "$BUP_DIR" src restore WVPASS bup init WVPASS mkdir -p src/x/1 src/y/1 WVPASS bup index -u src WVFAIL bup save --strip -n foo src/x src/y 2> tmp-err.log WVPASS grep -F "error: ignoring duplicate path '1' in '/'" tmp-err.log WVPASS rm -rf "$tmpdir" bup-0.29/t/test-save-with-valid-parent.sh000077500000000000000000000016641303127641400203140ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? 
export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } compare-trees() { "$top/t/compare-trees" "$@"; } WVPASS cd "$tmpdir" # Make sure that we can explicitly save a path whose parent is up to # date. WVSTART "save path with up to date parent" WVPASS bup init WVPASS mkdir -p src/a src/b WVPASS touch src/a/1 src/b/2 WVPASS bup index -u src WVPASS bup save -n src src WVPASS bup save -n src src/b WVPASS bup restore -C restore "src/latest/$(pwd)/" WVPASS test ! -e restore/src/a WVPASS "$top/t/compare-trees" -c src/b/ restore/src/b/ WVPASS bup save -n src src/a/1 WVPASS rm -r restore WVPASS bup restore -C restore "src/latest/$(pwd)/" WVPASS test ! -e restore/src/b WVPASS "$top/t/compare-trees" -c src/a/ restore/src/a/ WVPASS rm -rf "$tmpdir" bup-0.29/t/test-sparse-files.sh000077500000000000000000000126561303127641400164210ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail mb=1048576 top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? readonly mb top tmpdir export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVPASS cd "$tmpdir" # The 3MB guess is semi-arbitrary, but we've been informed that # Lustre, for example, uses 1MB, so guess higher than that, at least. block_size=$(bup-python -c \ "import os; print getattr(os.stat('.'), 'st_blksize', 0) or $mb * 3") \ || exit $? data_size=$((block_size * 10)) readonly block_size data_size WVPASS dd if=/dev/zero of=test-sparse-probe seek="$data_size" bs=1 count=1 probe_size=$(WVPASS du -k -s test-sparse-probe | WVPASS cut -f1) || exit $? if [ "$probe_size" -ge "$((data_size / 1024))" ]; then WVSTART "no sparse support detected -- skipping tests" exit 0 fi WVSTART "sparse restore on $(current-filesystem), assuming ${block_size}B blocks" WVPASS bup init WVPASS mkdir src WVPASS dd if=/dev/zero of=src/foo seek="$data_size" bs=1 count=1 WVPASS bup index src WVPASS bup save -n src src WVSTART "sparse file restore (all sparse)" WVPASS bup restore -C restore "src/latest/$(pwd)/" restore_size=$(WVPASS du -k -s restore | WVPASS cut -f1) || exit $? WVPASS [ "$restore_size" -ge "$((data_size / 1024))" ] WVPASS "$top/t/compare-trees" -c src/ restore/src/ WVSTART "sparse file restore --no-sparse (all sparse)" WVPASS rm -r restore WVPASS bup restore --no-sparse -C restore "src/latest/$(pwd)/" restore_size=$(WVPASS du -k -s restore | WVPASS cut -f1) || exit $? WVPASS [ "$restore_size" -ge "$((data_size / 1024))" ] WVPASS "$top/t/compare-trees" -c src/ restore/src/ WVSTART "sparse file restore --sparse (all sparse)" WVPASS rm -r restore WVPASS bup restore --sparse -C restore "src/latest/$(pwd)/" restore_size=$(WVPASS du -k -s restore | WVPASS cut -f1) || exit $? WVPASS [ "$restore_size" -le "$((3 * (block_size / 1024)))" ] WVPASS "$top/t/compare-trees" -c src/ restore/src/ WVSTART "sparse file restore --sparse (sparse end)" WVPASS echo "start" > src/foo WVPASS dd if=/dev/zero of=src/foo seek="$data_size" bs=1 count=1 conv=notrunc WVPASS bup index src WVPASS bup save -n src src WVPASS rm -r restore WVPASS bup restore --sparse -C restore "src/latest/$(pwd)/" restore_size=$(WVPASS du -k -s restore | WVPASS cut -f1) || exit $? 
WVPASS [ "$restore_size" -le "$((3 * (block_size / 1024)))" ] WVPASS "$top/t/compare-trees" -c src/ restore/src/ WVSTART "sparse file restore --sparse (sparse middle)" WVPASS echo "end" >> src/foo WVPASS bup index src WVPASS bup save -n src src WVPASS rm -r restore WVPASS bup restore --sparse -C restore "src/latest/$(pwd)/" restore_size=$(WVPASS du -k -s restore | WVPASS cut -f1) || exit $? WVPASS [ "$restore_size" -le "$((5 * (block_size / 1024)))" ] WVPASS "$top/t/compare-trees" -c src/ restore/src/ WVSTART "sparse file restore --sparse (bracketed zero run in buf)" WVPASS echo 'x' > src/foo WVPASS dd if=/dev/zero bs=1 count=512 >> src/foo WVPASS echo 'y' >> src/foo WVPASS bup index src WVPASS bup save -n src src WVPASS rm -r restore WVPASS bup restore --sparse -C restore "src/latest/$(pwd)/" WVPASS "$top/t/compare-trees" -c src/ restore/src/ WVSTART "sparse file restore --sparse (sparse start)" WVPASS dd if=/dev/zero of=src/foo seek="$data_size" bs=1 count=1 WVPASS echo "end" >> src/foo WVPASS bup index src WVPASS bup save -n src src WVPASS rm -r restore WVPASS bup restore --sparse -C restore "src/latest/$(pwd)/" restore_size=$(WVPASS du -k -s restore | WVPASS cut -f1) || exit $? WVPASS [ "$restore_size" -le "$((5 * (block_size / 1024)))" ] WVPASS "$top/t/compare-trees" -c src/ restore/src/ WVSTART "sparse file restore --sparse (sparse start and end)" WVPASS dd if=/dev/zero of=src/foo seek="$data_size" bs=1 count=1 WVPASS echo "middle" >> src/foo WVPASS dd if=/dev/zero of=src/foo seek=$((2 * data_size)) bs=1 count=1 conv=notrunc WVPASS bup index src WVPASS bup save -n src src WVPASS rm -r restore WVPASS bup restore --sparse -C restore "src/latest/$(pwd)/" restore_size=$(WVPASS du -k -s restore | WVPASS cut -f1) || exit $? WVPASS [ "$restore_size" -le "$((5 * (block_size / 1024)))" ] WVPASS "$top/t/compare-trees" -c src/ restore/src/ if test "$block_size" -gt $mb; then random_size="$block_size" else random_size=1M fi WVSTART "sparse file restore --sparse (random $random_size)" WVPASS bup random --seed "$RANDOM" 1M > src/foo WVPASS bup index src WVPASS bup save -n src src WVPASS rm -r restore WVPASS bup restore --sparse -C restore "src/latest/$(pwd)/" WVPASS "$top/t/compare-trees" -c src/ restore/src/ WVSTART "sparse file restore --sparse (random sparse regions)" WVPASS rm -rf "$BUP_DIR" src WVPASS bup init WVPASS mkdir src for sparse_dataset in 0 1 2 3 4 5 6 7 8 9 do WVPASS "$top/t/sparse-test-data" "src/foo-$sparse_dataset" done WVPASS bup index src WVPASS bup save -n src src WVPASS rm -r restore WVPASS bup restore --sparse -C restore "src/latest/$(pwd)/" WVPASS "$top/t/compare-trees" -c src/ restore/src/ WVSTART "sparse file restore --sparse (short zero runs around boundary)" WVPASS bup-python > src/foo <a.tmp WVPASS echo b >b.tmp WVPASS bup split -b a.tmp >taga.tmp WVPASS bup split -b b.tmp >tagb.tmp WVPASS cat a.tmp b.tmp | WVPASS bup split -b >tagab.tmp WVPASSEQ $(cat taga.tmp | wc -l) 1 WVPASSEQ $(cat tagb.tmp | wc -l) 1 WVPASSEQ $(cat tagab.tmp | wc -l) 1 WVPASSEQ $(cat tag[ab].tmp | wc -l) 2 WVPASSEQ "$(bup split -b a.tmp b.tmp)" "$(cat tagab.tmp)" WVPASSEQ "$(bup split -b --keep-boundaries a.tmp b.tmp)" "$(cat tag[ab].tmp)" WVPASSEQ "$(cat tag[ab].tmp | bup split -b --keep-boundaries --git-ids)" \ "$(cat tag[ab].tmp)" WVPASSEQ "$(cat tag[ab].tmp | bup split -b --git-ids)" \ "$(cat tagab.tmp)" WVPASS bup split --bench -b <"$top/t/testfile1" >tags1.tmp WVPASS bup split -vvvv -b "$top/t/testfile2" >tags2.tmp WVPASS echo -n "" | WVPASS bup split -n split_empty_string.tmp WVPASS bup 
margin WVPASS bup midx -f WVPASS bup midx --check -a WVPASS bup midx -o "$BUP_DIR/objects/pack/test1.midx" \ "$BUP_DIR"/objects/pack/*.idx WVPASS bup midx --check -a WVPASS bup midx -o "$BUP_DIR"/objects/pack/test1.midx \ "$BUP_DIR"/objects/pack/*.idx \ "$BUP_DIR"/objects/pack/*.idx WVPASS bup midx --check -a all=$(echo "$BUP_DIR"/objects/pack/*.idx "$BUP_DIR"/objects/pack/*.midx) WVPASS bup midx -o "$BUP_DIR"/objects/pack/zzz.midx $all WVPASS bup tick WVPASS bup midx -o "$BUP_DIR"/objects/pack/yyy.midx $all WVPASS bup midx -a WVPASSEQ "$(echo "$BUP_DIR"/objects/pack/*.midx)" \ ""$BUP_DIR"/objects/pack/yyy.midx" WVPASS bup margin WVPASS bup split -t "$top/t/testfile2" >tags2t.tmp WVPASS bup split -t "$top/t/testfile2" --fanout 3 >tags2tf.tmp WVPASS bup split -r "$BUP_DIR" -c "$top/t/testfile2" >tags2c.tmp WVPASS bup split -r ":$BUP_DIR" -c "$top/t/testfile2" >tags2c.tmp WVPASS ls -lR \ | WVPASS bup split -r ":$BUP_DIR" -c --fanout 3 --max-pack-objects 3 -n lslr \ || exit $? WVPASS bup ls WVFAIL bup ls /does-not-exist WVPASS bup ls /lslr WVPASS bup ls /lslr/latest WVPASS bup ls /lslr/latest/ #WVPASS bup ls /lslr/1971-01-01 # all dates always exist WVFAIL diff -u tags1.tmp tags2.tmp # fanout must be different from non-fanout WVFAIL diff tags2t.tmp tags2tf.tmp WVPASS wc -c "$top/t/testfile1" "$top/t/testfile2" WVPASS wc -l tags1.tmp tags2.tmp WVSTART "join" WVPASS bup join $(cat tags1.tmp) >out1.tmp WVPASS bup join out2.tmp WVPASS bup join out2c.tmp WVPASS bup join -r ":$BUP_DIR" out2c.tmp WVPASS diff -u "$top/t/testfile1" out1.tmp WVPASS diff -u "$top/t/testfile2" out2.tmp WVPASS diff -u "$top/t/testfile2" out2t.tmp WVPASS diff -u "$top/t/testfile2" out2c.tmp WVPASSEQ "$(bup join split_empty_string.tmp)" "" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-tz.sh000077500000000000000000000010771303127641400144540ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? set -o pipefail top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVSTART "half hour TZ" export TZ=ACDT-10:30 WVPASS bup init WVPASS cd "$tmpdir" WVPASS mkdir src WVPASS bup index src WVPASS bup save -n src -d 1420164180 src WVPASSEQ "$(WVPASS git cat-file commit src | sed -ne 's/^author .*> //p')" \ "1420164180 +1030" WVPASSEQ "$(WVPASS bup ls /src)" \ "2015-01-02-123300 latest" WVPASS rm -rf "$tmpdir" bup-0.29/t/test-web.sh000077500000000000000000000030541303127641400145710ustar00rootroot00000000000000#!/usr/bin/env bash . wvtest-bup.sh || exit $? . t/lib.sh || exit $? set -o pipefail TOP="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" bup() { "$TOP/bup" "$@" } wait-for-server-start() { curl --unix-socket ./socket http://localhost/ curl_status=$? while test $curl_status -eq 7; do sleep 0.2 curl --unix-socket ./socket http://localhost/ curl_status=$? done WVPASSEQ $curl_status 0 } WVPASS cd "$tmpdir" # FIXME: add WVSKIP run_test=true if test -z "$(type -p curl)"; then WVSTART 'curl does not appear to be installed; skipping test' run_test='' fi WVPASS bup-python -c "import socket as s; s.socket(s.AF_UNIX).bind('socket')" curl -s --unix-socket ./socket http://localhost/foo if test $? -ne 7; then WVSTART 'curl does not appear to support --unix-socket; skipping test' run_test='' fi if ! 
bup-python -c 'import tornado' 2> /dev/null; then WVSTART 'unable to import tornado; skipping test' run_test='' fi if test -n "$run_test"; then WVSTART 'web' WVPASS bup init WVPASS mkdir src WVPASS echo '¡excitement!' > src/data WVPASS bup index src WVPASS bup save -n '¡excitement!' --strip src "$TOP/bup" web unix://socket & web_pid=$! wait-for-server-start WVPASS curl --unix-socket ./socket \ 'http://localhost/%C2%A1excitement%21/latest/data' > result WVPASSEQ '¡excitement!' "$(cat result)" WVPASS kill -s TERM "$web_pid" WVPASS wait "$web_pid" fi WVPASS rm -r "$tmpdir" bup-0.29/t/test-xdev.sh000077500000000000000000000073241303127641400147660ustar00rootroot00000000000000#!/usr/bin/env bash . ./wvtest-bup.sh || exit $? set -o pipefail root_status="$(t/root-status)" || exit $? if [ "$root_status" != root ]; then WVSTART 'not root: skipping tests' exit 0 # FIXME: add WVSKIP. fi if ! modprobe loop; then WVSTART 'unable to load loopback module; skipping tests' 1>&2 exit 0 fi # These tests are only likely to work under Linux for now # (patches welcome). if ! [[ $(uname) =~ Linux ]]; then WVSTART 'not Linux: skipping tests' exit 0 # FIXME: add WVSKIP. fi top="$(WVPASS pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" export GIT_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVPASS bup init WVPASS pushd "$tmpdir" WVSTART 'drecurse' WVPASS dd if=/dev/zero of=testfs-1.img bs=1M count=32 WVPASS dd if=/dev/zero of=testfs-2.img bs=1M count=32 WVPASS mkfs -F testfs-1.img # Don't care what type (though must have symlinks) WVPASS mkfs -F testfs-2.img # Don't care what type (though must have symlinks) WVPASS mkdir -p src/mnt-1/hidden-1 src/mnt-2/hidden-2 WVPASS mount -o loop testfs-1.img src/mnt-1 WVPASS mount -o loop testfs-1.img src/mnt-2 WVPASS touch src/1 WVPASS mkdir -p src/mnt-1/x WVPASS touch src/mnt-1/2 src/mnt-1/x/3 WVPASS touch src/mnt-2/4 (WVPASS cd src && WVPASS ln -s mnt-2 mnt-link) (WVPASS cd src && WVPASS ln -s . top) WVPASSEQ "$(bup drecurse src | grep -vF lost+found)" "src/top src/mnt-link src/mnt-2/4 src/mnt-2/ src/mnt-1/x/3 src/mnt-1/x/ src/mnt-1/2 src/mnt-1/ src/1 src/" WVPASSEQ "$(bup drecurse -x src)" "src/top src/mnt-link src/mnt-2/ src/mnt-1/ src/1 src/" WVSTART 'index/save/restore' WVPASS bup index src WVPASS bup save -n src src WVPASS mkdir src-restore WVPASS bup restore -C src-restore "/src/latest$(pwd)/" WVPASS test -d src-restore/src WVPASS "$top/t/compare-trees" -c src/ src-restore/src/ # Test -x when none of the mount points are explicitly indexed WVPASS rm -r "$BUP_DIR" src-restore WVPASS bup init WVPASS bup index -x src WVPASS bup save -n src src WVPASS mkdir src-restore WVPASS bup restore -C src-restore "/src/latest$(pwd)/" WVPASS test -d src-restore/src WVPASSEQ "$(cd src-restore/src && find . -not -name lost+found | LC_ALL=C sort)" \ ". ./1 ./mnt-1 ./mnt-2 ./mnt-link ./top" # Test -x when a mount point is explicitly indexed. This should # include the mount. WVPASS rm -r "$BUP_DIR" src-restore WVPASS bup init WVPASS bup index -x src src/mnt-2 WVPASS bup save -n src src WVPASS mkdir src-restore WVPASS bup restore -C src-restore "/src/latest$(pwd)/" WVPASS test -d src-restore/src WVPASSEQ "$(cd src-restore/src && find . -not -name lost+found | LC_ALL=C sort)" \ ". ./1 ./mnt-1 ./mnt-2 ./mnt-2/4 ./mnt-link ./top" # Test -x when a direct link to a mount point is explicitly indexed. # This should *not* include the mount. 
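# (mnt-link was created above via "ln -s mnt-2 mnt-link", so only the link
# itself, not the mounted filesystem behind it, should end up in the save.)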
WVPASS rm -r "$BUP_DIR" src-restore WVPASS bup init WVPASS bup index -x src src/mnt-link WVPASS bup save -n src src WVPASS mkdir src-restore WVPASS bup restore -C src-restore "/src/latest$(pwd)/" WVPASS test -d src-restore/src WVPASSEQ "$(cd src-restore/src && find . -not -name lost+found | LC_ALL=C sort)" \ ". ./1 ./mnt-1 ./mnt-2 ./mnt-link ./top" # Test -x when a path that resolves to a mount point is explicitly # indexed (i.e. dir symlnks that redirect the leaf to a mount point). # This should include the mount. WVPASS rm -r "$BUP_DIR" src-restore WVPASS bup init WVPASS bup index -x src src/top/top/mnt-2 WVPASS bup save -n src src WVPASS mkdir src-restore WVPASS bup restore -C src-restore "/src/latest$(pwd)/" WVPASS test -d src-restore/src WVPASSEQ "$(cd src-restore/src && find . -not -name lost+found | LC_ALL=C sort)" \ ". ./1 ./mnt-1 ./mnt-2 ./mnt-2/4 ./mnt-link ./top" WVPASS cd "$top" WVPASS umount "$tmpdir/src/mnt-1" WVPASS umount "$tmpdir/src/mnt-2" WVPASS rm -r "$tmpdir" bup-0.29/t/test.sh000077500000000000000000000154061303127641400140220ustar00rootroot00000000000000#!/usr/bin/env bash . wvtest.sh . wvtest-bup.sh . t/lib.sh set -o pipefail top="$(WVPASS /bin/pwd)" || exit $? tmpdir="$(WVPASS wvmktempdir)" || exit $? export BUP_DIR="$tmpdir/bup" bup() { "$top/bup" "$@"; } WVPASS cd "$tmpdir" WVSTART "init" WVPASS bup init D=bupdata.tmp WVPASS force-delete $D WVPASS mkdir $D WVPASS touch $D/a WVPASS bup random 128k >$D/b WVPASS mkdir $D/d $D/d/e WVPASS bup random 512 >$D/f WVPASS touch $D/d/z WVPASS touch $D/d/z WVPASS bup index $D WVPASS bup save -t $D WVSTART "bloom" WVPASS bup bloom -c $(ls -1 "$BUP_DIR"/objects/pack/*.idx|head -n1) WVPASS rm "$BUP_DIR"/objects/pack/bup.bloom WVPASS bup bloom -k 4 WVPASS bup bloom -c $(ls -1 "$BUP_DIR"/objects/pack/*.idx|head -n1) WVPASS bup bloom -d "$BUP_DIR"/objects/pack --ruin --force WVFAIL bup bloom -c $(ls -1 "$BUP_DIR"/objects/pack/*.idx|head -n1) WVPASS bup bloom --force -k 5 WVPASS bup bloom -c $(ls -1 "$BUP_DIR"/objects/pack/*.idx|head -n1) WVSTART "memtest" WVPASS bup memtest -c1 -n100 WVPASS bup memtest -c1 -n100 --existing WVSTART "save/git-fsck" ( WVPASS cd "$BUP_DIR" #git repack -Ad #git prune WVPASS bup random 4k | WVPASS bup split -b (WVPASS cd "$top/t/sampledata" && WVPASS bup save -vvn master /) || exit $? result="$(git fsck --full --strict 2>&1)" || exit $? n=$(echo "$result" | WVFAIL egrep -v 'dangling (commit|tree|blob)' | WVPASS tee -a /dev/stderr | WVPASS wc -l) || exit $? WVPASS [ "$n" -eq 0 ] ) || exit $? 
WVSTART "restore" WVPASS force-delete buprestore.tmp WVFAIL bup restore boink WVPASS touch "$tmpdir/$D/$D" WVPASS bup index -u "$tmpdir/$D" WVPASS bup save -n master / WVPASS bup restore -C buprestore.tmp "/master/latest/$tmpdir/$D" WVPASSEQ "$(ls buprestore.tmp)" "bupdata.tmp" WVPASS force-delete buprestore.tmp WVPASS bup restore -C buprestore.tmp "/master/latest/$tmpdir/$D/" WVPASS touch $D/non-existent-file buprestore.tmp/non-existent-file # else diff fails WVPASS diff -ur $D/ buprestore.tmp/ WVPASS force-delete buprestore.tmp WVPASS echo -n "" | WVPASS bup split -n split_empty_string.tmp WVPASS bup restore -C buprestore.tmp split_empty_string.tmp/latest/ WVPASSEQ "$(cat buprestore.tmp/data)" "" ( tmp=testrestore.tmp WVPASS force-delete $tmp WVPASS mkdir $tmp export BUP_DIR="$(pwd)/$tmp/bup" WVPASS WVPASS bup init WVPASS mkdir -p $tmp/src/x/y/z WVPASS bup random 8k > $tmp/src/x/y/random-1 WVPASS bup random 8k > $tmp/src/x/y/z/random-2 WVPASS bup index -u $tmp/src WVPASS bup save --strip -n foo $tmp/src WVSTART "restore /foo/latest" WVPASS bup restore -C $tmp/restore /foo/latest WVPASS "$top/t/compare-trees" $tmp/src/ $tmp/restore/latest/ WVSTART "restore /foo/latest/" WVPASS force-delete "$tmp/restore" WVPASS bup restore -C $tmp/restore /foo/latest/ for x in $tmp/src/*; do WVPASS "$top/t/compare-trees" $x/ $tmp/restore/$(basename $x); done WVSTART "restore /foo/latest/." WVPASS force-delete "$tmp/restore" WVPASS bup restore -C $tmp/restore /foo/latest/. WVPASS "$top/t/compare-trees" $tmp/src/ $tmp/restore/ WVSTART "restore /foo/latest/x" WVPASS force-delete "$tmp/restore" WVPASS bup restore -C $tmp/restore /foo/latest/x WVPASS "$top/t/compare-trees" $tmp/src/x/ $tmp/restore/x/ WVSTART "restore /foo/latest/x/" WVPASS force-delete "$tmp/restore" WVPASS bup restore -C $tmp/restore /foo/latest/x/ for x in $tmp/src/x/*; do WVPASS "$top/t/compare-trees" $x/ $tmp/restore/$(basename $x); done WVSTART "restore /foo/latest/x/." WVPASS force-delete "$tmp/restore" WVPASS bup restore -C $tmp/restore /foo/latest/x/. WVPASS "$top/t/compare-trees" $tmp/src/x/ $tmp/restore/ ) || exit $? WVSTART "ftp" WVPASS bup ftp "cat /master/latest/$tmpdir/$D/b" >$D/b.new WVPASS bup ftp "cat /master/latest/$tmpdir/$D/f" >$D/f.new WVPASS bup ftp "cat /master/latest/$tmpdir/$D/f"{,} >$D/f2.new WVPASS bup ftp "cat /master/latest/$tmpdir/$D/a" >$D/a.new WVPASSEQ "$(sha1sum <$D/b)" "$(sha1sum <$D/b.new)" WVPASSEQ "$(sha1sum <$D/f)" "$(sha1sum <$D/f.new)" WVPASSEQ "$(cat $D/f.new{,} | sha1sum)" "$(sha1sum <$D/f2.new)" WVPASSEQ "$(sha1sum <$D/a)" "$(sha1sum <$D/a.new)" WVSTART "tag" WVFAIL bup tag -d v0.n 2>/dev/null WVFAIL bup tag v0.n non-existant 2>/dev/null WVPASSEQ "$(bup tag)" "" WVPASS bup tag v0.1 master WVPASSEQ "$(bup tag)" "v0.1" WVFAIL bup tag v0.1 master WVPASS bup tag -f v0.1 master WVPASS bup tag -d v0.1 WVPASS bup tag -f -d v0.1 WVFAIL bup tag -d v0.1 WVSTART "save (no index)" ( tmp=save-no-index.tmp WVPASS force-delete $tmp WVPASS mkdir $tmp export BUP_DIR="$(WVPASS pwd)/$tmp/bup" || exit $? WVPASS bup init WVFAIL bup save -n nothing / WVPASS rm -r "$tmp" ) || exit $? 
WVSTART "indexfile" D=indexfile.tmp INDEXFILE=tmpindexfile.tmp WVPASS rm -f $INDEXFILE WVPASS force-delete $D WVPASS mkdir $D export BUP_DIR="$D/.bup" WVPASS bup init WVPASS touch $D/a WVPASS touch $D/b WVPASS mkdir $D/c WVPASS bup index -ux $D WVPASS bup save --strip -n bupdir $D WVPASSEQ "$(bup ls -F bupdir/latest/)" "a b c/" WVPASS bup index -f $INDEXFILE --exclude=$D/c -ux $D WVPASS bup save --strip -n indexfile -f $INDEXFILE $D WVPASSEQ "$(bup ls indexfile/latest/)" "a b" WVSTART "import-rsnapshot" D=rsnapshot.tmp export BUP_DIR="$tmpdir/$D/.bup" WVPASS force-delete $D WVPASS mkdir $D WVPASS bup init WVPASS mkdir -p $D/hourly.0/buptest/a WVPASS touch $D/hourly.0/buptest/a/b WVPASS mkdir -p $D/hourly.0/buptest/c/d WVPASS touch $D/hourly.0/buptest/c/d/e WVPASS true WVPASS bup import-rsnapshot $D/ WVPASSEQ "$(bup ls -F buptest/latest/)" "a/ c/" WVSTART "save disjoint top-level directories" ( # Resolve any symlinks involving the top top-level dirs. real_pwd="$(WVPASS resolve-parent .)" || exit $? real_tmp="$(WVPASS resolve-parent /tmp/.)" || exit $? pwd_top="$(echo $real_pwd | WVPASS awk -F "/" '{print $2}')" || exit $? tmp_top="$(echo $real_tmp | WVPASS awk -F "/" '{print $2}')" || exit $? if [ "$pwd_top" = "$tmp_top" ]; then echo "(running from within /$tmp_top; skipping test)" 1>&2 exit 0 fi D=bupdata.tmp WVPASS force-delete $D WVPASS mkdir -p $D/x WVPASS date > $D/x/1 tmpdir2="$(WVPASS mktemp -d $real_tmp/bup-test-XXXXXXX)" || exit $? cleanup() { WVPASS rm -r "$tmpdir2"; } WVPASS trap cleanup EXIT WVPASS date > "$tmpdir2/2" export BUP_DIR="$tmpdir/bup" WVPASS test -d "$BUP_DIR" && WVPASS rm -r "$BUP_DIR" WVPASS bup init WVPASS bup index -vu $(pwd)/$D/x "$tmpdir2" WVPASS bup save -t -n src $(pwd)/$D/x "$tmpdir2" # For now, assume that "ls -a" and "sort" use the same order. actual="$(WVPASS bup ls -AF src/latest)" || exit $? expected="$(echo -e "$pwd_top/\n$tmp_top/" | WVPASS sort)" || exit $? WVPASSEQ "$actual" "$expected" ) || exit $? WVPASS rm -rf "$tmpdir" bup-0.29/t/testfile1000066400000000000000000004657101303127641400143360ustar00rootroot00000000000000#!/hfe/ova/rai clguba sebz ohc vzcbeg bcgvbaf, qerphefr sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc qerphefr -- k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf d,dhvrg qba'g npghnyyl cevag svyranzrf cebsvyr eha haqre gur clguba cebsvyre """ b = bcgvbaf.Bcgvbaf('ohc qerphefr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar svyranzr rkcrpgrq") vg = qerphefr.erphefvir_qveyvfg(rkgen, bcg.kqri) vs bcg.cebsvyr: vzcbeg pCebsvyr qrs qb_vg(): sbe v va vg: cnff pCebsvyr.eha('qb_vg()') ryfr: vs bcg.dhvrg: sbe v va vg: cnff ryfr: sbe (anzr,fg) va vg: cevag anzr vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc fcyvg [-gpo] [-a anzr] [--orapu] [svyranzrf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu o,oybof bhgchg n frevrf bs oybo vqf g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) A,abbc qba'g npghnyyl fnir gur qngn naljurer d,dhvrg qba'g cevag cebterff zrffntrf i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) pbcl whfg pbcl vachg gb bhgchg, unfufcyvggvat nybat gur jnl orapu cevag orapuznex gvzvatf gb fgqree znk-cnpx-fvmr= znkvzhz olgrf va n fvatyr cnpx znk-cnpx-bowrpgf= znkvzhz ahzore bs bowrpgf va n fvatyr cnpx snabhg= znkvzhz ahzore bs oybof va n fvatyr gerr """ b = bcgvbaf.Bcgvbaf('ohc fcyvg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr be bcg.abbc be bcg.pbcl): b.sngny("hfr bar be zber bs -o, -g, -p, -a, -A, --pbcl") vs (bcg.abbc be bcg.pbcl) naq (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny('-A vf vapbzcngvoyr jvgu -o, -g, -p, -a') vs bcg.ireobfr >= 2: tvg.ireobfr = bcg.ireobfr - 1 bcg.orapu = 1 vs bcg.znk_cnpx_fvmr: unfufcyvg.znk_cnpx_fvmr = cnefr_ahz(bcg.znk_cnpx_fvmr) vs bcg.znk_cnpx_bowrpgf: unfufcyvg.znk_cnpx_bowrpgf = cnefr_ahz(bcg.znk_cnpx_bowrpgf) vs bcg.snabhg: unfufcyvg.snabhg = cnefr_ahz(bcg.snabhg) vs bcg.oybof: unfufcyvg.snabhg = 0 vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") fgneg_gvzr = gvzr.gvzr() ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.abbc be bcg.pbcl: pyv = j = byqers = Abar ryvs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() svyrf = rkgen naq (bcra(sa) sbe sa va rkgen) be [flf.fgqva] vs j: funyvfg = unfufcyvg.fcyvg_gb_funyvfg(j, svyrf) gerr = j.arj_gerr(funyvfg) ryfr: ynfg = 0 sbe (oybo, ovgf) va unfufcyvg.unfufcyvg_vgre(svyrf): unfufcyvg.gbgny_fcyvg += yra(oybo) vs bcg.pbcl: flf.fgqbhg.jevgr(fge(oybo)) zrtf = unfufcyvg.gbgny_fcyvg/1024/1024 vs abg bcg.dhvrg naq ynfg != zrtf: cebterff('%q Zolgrf ernq\e' % zrtf) ynfg = zrtf cebterff('%q Zolgrf ernq, qbar.\a' % zrtf) vs bcg.ireobfr: ybt('\a') vs bcg.oybof: sbe (zbqr,anzr,ova) va funyvfg: cevag ova.rapbqr('urk') vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fcyvg\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') vs j: j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() frpf = gvzr.gvzr() - fgneg_gvzr fvmr = unfufcyvg.gbgny_fcyvg vs bcg.orapu: ybt('\aohc: %.2sxolgrf va %.2s frpf = %.2s xolgrf/frp\a' % (fvmr/1024., frpf, fvmr/1024./frpf)) #!/hfe/ova/rai clguba vzcbeg flf, er, fgehpg, zznc sebz ohc vzcbeg tvg, bcgvbaf sebz ohc.urycref vzcbeg * qrs f_sebz_olgrf(olgrf): pyvfg = [pue(o) sbe o va olgrf] erghea ''.wbva(pyvfg) qrs ercbeg(pbhag): svryqf = ['IzFvmr', 'IzEFF', 'IzQngn', 'IzFgx'] q = {} sbe yvar va bcra('/cebp/frys/fgnghf').ernqyvarf(): y = er.fcyvg(e':\f*', yvar.fgevc(), 1) q[y[0]] = y[1] vs pbhag >= 0: r1 = pbhag svryqf = [q[x] sbe x va svryqf] ryfr: r1 = '' cevag ('%9f ' + ('%10f ' * yra(svryqf))) % ghcyr([r1] + svryqf) flf.fgqbhg.syhfu() bcgfcrp = """ ohc zrzgrfg [-a ryrzragf] [-p plpyrf] -- a,ahzore= ahzore bs bowrpgf cre plpyr p,plpyrf= 
ahzore bs plpyrf gb eha vtaber-zvqk vtaber .zvqk svyrf, hfr bayl .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zrzgrfg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') tvg.vtaber_zvqk = bcg.vtaber_zvqk tvg.purpx_ercb_be_qvr() z = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) plpyrf = bcg.plpyrf be 100 ahzore = bcg.ahzore be 10000 ercbeg(-1) s = bcra('/qri/henaqbz') n = zznc.zznc(-1, 20) ercbeg(0) sbe p va kenatr(plpyrf): sbe a va kenatr(ahzore): o = s.ernq(3) vs 0: olgrf = yvfg(fgehpg.hacnpx('!OOO', o)) + [0]*17 olgrf[2] &= 0ks0 ova = fgehpg.cnpx('!20f', f_sebz_olgrf(olgrf)) ryfr: n[0:2] = o[0:2] n[2] = pue(beq(o[2]) & 0ks0) ova = fge(n[0:20]) #cevag ova.rapbqr('urk') z.rkvfgf(ova) ercbeg((p+1)*ahzore) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * qrs cevag_abqr(grkg, a): cersvk = '' vs bcg.unfu: cersvk += "%f " % a.unfu.rapbqr('urk') vs fgng.F_VFQVE(a.zbqr): cevag '%f%f/' % (cersvk, grkg) ryvs fgng.F_VFYAX(a.zbqr): cevag '%f%f@' % (cersvk, grkg) ryfr: cevag '%f%f' % (cersvk, grkg) bcgfcrp = """ ohc yf -- f,unfu fubj unfu sbe rnpu svyr """ b = bcgvbaf.Bcgvbaf('ohc yf', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) vs abg rkgen: rkgen = ['/'] erg = 0 sbe q va rkgen: gel: a = gbc.yerfbyir(q) vs fgng.F_VFQVE(a.zbqr): sbe fho va a: cevag_abqr(fho.anzr, fho) ryfr: cevag_abqr(q, a) rkprcg isf.AbqrReebe, r: ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er, fgng, ernqyvar, sazngpu sebz ohc vzcbeg bcgvbaf, tvg, fudhbgr, isf sebz ohc.urycref vzcbeg * qrs abqr_anzr(grkg, a): vs fgng.F_VFQVE(a.zbqr): erghea '%f/' % grkg ryvs fgng.F_VFYAX(a.zbqr): erghea '%f@' % grkg ryfr: erghea '%f' % grkg qrs qb_yf(cngu, a): y = [] vs fgng.F_VFQVE(a.zbqr): sbe fho va a: y.nccraq(abqr_anzr(fho.anzr, fho)) ryfr: y.nccraq(abqr_anzr(cngu, a)) cevag pbyhzangr(y, '') qrs jevgr_gb_svyr(vas, bhgs): sbe oybo va puhaxlernqre(vas): bhgs.jevgr(oybo) qrs vachgvgre(): vs bf.vfnggl(flf.fgqva.svyrab()): juvyr 1: gel: lvryq enj_vachg('ohc> ') rkprcg RBSReebe: oernx ryfr: sbe yvar va flf.fgqva: lvryq yvar qrs _pbzcyrgre_trg_fhof(yvar): (dglcr, ynfgjbeq) = fudhbgr.hasvavfurq_jbeq(yvar) (qve,anzr) = bf.cngu.fcyvg(ynfgjbeq) #ybt('\apbzcyrgre: %e %e %e\a' % (dglcr, ynfgjbeq, grkg)) a = cjq.erfbyir(qve) fhof = yvfg(svygre(ynzoqn k: k.anzr.fgnegfjvgu(anzr), a.fhof())) erghea (qve, anzr, dglcr, ynfgjbeq, fhof) _ynfg_yvar = Abar _ynfg_erf = Abar qrs pbzcyrgre(grkg, fgngr): tybony _ynfg_yvar tybony _ynfg_erf gel: yvar = ernqyvar.trg_yvar_ohssre()[:ernqyvar.trg_raqvqk()] vs _ynfg_yvar != yvar: _ynfg_erf = _pbzcyrgre_trg_fhof(yvar) _ynfg_yvar = yvar (qve, anzr, dglcr, ynfgjbeq, fhof) = _ynfg_erf vs fgngr < yra(fhof): fa = fhof[fgngr] fa1 = fa.erfbyir('') # qrers flzyvaxf shyyanzr = bf.cngu.wbva(qve, fa.anzr) vs fgng.F_VFQVE(fa1.zbqr): erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr+'/', grezvangr=Snyfr) ryfr: erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr, grezvangr=Gehr) + ' ' erghea grkg + erg rkprcg Rkprcgvba, r: ybt('\areebe va pbzcyrgvba: %f\a' % r) bcgfcrp = """ ohc sgc """ b = bcgvbaf.Bcgvbaf('ohc sgc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) cjq = gbc vs rkgen: yvarf = rkgen ryfr: ernqyvar.frg_pbzcyrgre_qryvzf(' \g\a\e/') ernqyvar.frg_pbzcyrgre(pbzcyrgre) ernqyvar.cnefr_naq_ovaq("gno: pbzcyrgr") yvarf = vachgvgre() sbe yvar va yvarf: vs abg yvar.fgevc(): pbagvahr jbeqf = [jbeq sbe 
(jbeqfgneg,jbeq) va fudhbgr.dhbgrfcyvg(yvar)] pzq = jbeqf[0].ybjre() #ybt('rkrphgr: %e %e\a' % (pzq, cnez)) gel: vs pzq == 'yf': sbe cnez va (jbeqf[1:] be ['.']): qb_yf(cnez, cjq.erfbyir(cnez)) ryvs pzq == 'pq': sbe cnez va jbeqf[1:]: cjq = cjq.erfbyir(cnez) ryvs pzq == 'cjq': cevag cjq.shyyanzr() ryvs pzq == 'png': sbe cnez va jbeqf[1:]: jevgr_gb_svyr(cjq.erfbyir(cnez).bcra(), flf.fgqbhg) ryvs pzq == 'trg': vs yra(jbeqf) abg va [2,3]: envfr Rkprcgvba('Hfntr: trg [ybpnyanzr]') eanzr = jbeqf[1] (qve,onfr) = bf.cngu.fcyvg(eanzr) yanzr = yra(jbeqf)>2 naq jbeqf[2] be onfr vas = cjq.erfbyir(eanzr).bcra() ybt('Fnivat %e\a' % yanzr) jevgr_gb_svyr(vas, bcra(yanzr, 'jo')) ryvs pzq == 'ztrg': sbe cnez va jbeqf[1:]: (qve,onfr) = bf.cngu.fcyvg(cnez) sbe a va cjq.erfbyir(qve).fhof(): vs sazngpu.sazngpu(a.anzr, onfr): gel: ybt('Fnivat %e\a' % a.anzr) vas = a.bcra() bhgs = bcra(a.anzr, 'jo') jevgr_gb_svyr(vas, bhgs) bhgs.pybfr() rkprcg Rkprcgvba, r: ybt(' reebe: %f\a' % r) ryvs pzq == 'uryc' be pzq == '?': ybt('Pbzznaqf: yf pq cjq png trg ztrg uryc dhvg\a') ryvs pzq == 'dhvg' be pzq == 'rkvg' be pzq == 'olr': oernx ryfr: envfr Rkprcgvba('ab fhpu pbzznaq %e' % pzq) rkprcg Rkprcgvba, r: ybt('reebe: %f\a' % r) #envfr #!/hfe/ova/rai clguba vzcbeg flf, zznc sebz ohc vzcbeg bcgvbaf, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc enaqbz [-F frrq] -- F,frrq= bcgvbany enaqbz ahzore frrq (qrsnhyg 1) s,sbepr cevag enaqbz qngn gb fgqbhg rira vs vg'f n ggl """ b = bcgvbaf.Bcgvbaf('ohc enaqbz', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") gbgny = cnefr_ahz(rkgen[0]) vs bcg.sbepr be (abg bf.vfnggl(1) naq abg ngbv(bf.raiveba.trg('OHC_SBEPR_GGL')) & 1): _unfufcyvg.jevgr_enaqbz(flf.fgqbhg.svyrab(), gbgny, bcg.frrq be 0) ryfr: ybt('reebe: abg jevgvat ovanel qngn gb n grezvany. 
Hfr -s gb sbepr.\a') flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc uryc """ b = bcgvbaf.Bcgvbaf('ohc uryc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) == 0: # gur jenccre cebtenz cebivqrf gur qrsnhyg hfntr fgevat bf.rkrpic(bf.raiveba['OHC_ZNVA_RKR'], ['ohc']) ryvs yra(rkgen) == 1: qbpanzr = (rkgen[0]=='ohc' naq 'ohc' be ('ohc-%f' % rkgen[0])) rkr = flf.neti[0] (rkrcngu, rkrsvyr) = bf.cngu.fcyvg(rkr) znacngu = bf.cngu.wbva(rkrcngu, '../Qbphzragngvba/' + qbpanzr + '.[1-9]') t = tybo.tybo(znacngu) vs t: bf.rkrpic('zna', ['zna', '-y', t[0]]) ryfr: bf.rkrpic('zna', ['zna', qbpanzr]) ryfr: b.sngny("rknpgyl bar pbzznaq anzr rkcrpgrq") #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng, reeab, shfr, er, gvzr, grzcsvyr sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * pynff Fgng(shfr.Fgng): qrs __vavg__(frys): frys.fg_zbqr = 0 frys.fg_vab = 0 frys.fg_qri = 0 frys.fg_ayvax = 0 frys.fg_hvq = 0 frys.fg_tvq = 0 frys.fg_fvmr = 0 frys.fg_ngvzr = 0 frys.fg_zgvzr = 0 frys.fg_pgvzr = 0 frys.fg_oybpxf = 0 frys.fg_oyxfvmr = 0 frys.fg_eqri = 0 pnpur = {} qrs pnpur_trg(gbc, cngu): cnegf = cngu.fcyvg('/') pnpur[('',)] = gbc p = Abar znk = yra(cnegf) #ybt('pnpur: %e\a' % pnpur.xrlf()) sbe v va enatr(znk): cer = cnegf[:znk-v] #ybt('pnpur gelvat: %e\a' % cer) p = pnpur.trg(ghcyr(cer)) vs p: erfg = cnegf[znk-v:] sbe e va erfg: #ybt('erfbyivat %e sebz %e\a' % (e, p.shyyanzr())) p = p.yerfbyir(e) xrl = ghcyr(cer + [e]) #ybt('fnivat: %e\a' % (xrl,)) pnpur[xrl] = p oernx nffreg(p) erghea p pynff OhcSf(shfr.Shfr): qrs __vavg__(frys, gbc): shfr.Shfr.__vavg__(frys) frys.gbc = gbc qrs trgngge(frys, cngu): ybt('--trgngge(%e)\a' % cngu) gel: abqr = pnpur_trg(frys.gbc, cngu) fg = Fgng() fg.fg_zbqr = abqr.zbqr fg.fg_ayvax = abqr.ayvaxf() fg.fg_fvmr = abqr.fvmr() fg.fg_zgvzr = abqr.zgvzr fg.fg_pgvzr = abqr.pgvzr fg.fg_ngvzr = abqr.ngvzr erghea fg rkprcg isf.AbFhpuSvyr: erghea -reeab.RABRAG qrs ernqqve(frys, cngu, bssfrg): ybt('--ernqqve(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) lvryq shfr.Qveragel('.') lvryq shfr.Qveragel('..') sbe fho va abqr.fhof(): lvryq shfr.Qveragel(fho.anzr) qrs ernqyvax(frys, cngu): ybt('--ernqyvax(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) erghea abqr.ernqyvax() qrs bcra(frys, cngu, syntf): ybt('--bcra(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) nppzbqr = bf.B_EQBAYL | bf.B_JEBAYL | bf.B_EQJE vs (syntf & nppzbqr) != bf.B_EQBAYL: erghea -reeab.RNPPRF abqr.bcra() qrs eryrnfr(frys, cngu, syntf): ybt('--eryrnfr(%e)\a' % cngu) qrs ernq(frys, cngu, fvmr, bssfrg): ybt('--ernq(%e)\a' % cngu) a = pnpur_trg(frys.gbc, cngu) b = a.bcra() b.frrx(bssfrg) erghea b.ernq(fvmr) vs abg unfngge(shfr, '__irefvba__'): envfr EhagvzrReebe, "lbhe shfr zbqhyr vf gbb byq sbe shfr.__irefvba__" shfr.shfr_clguba_ncv = (0, 2) bcgfcrp = """ ohc shfr [-q] [-s] -- q,qroht vapernfr qroht yriry s,sbertebhaq eha va sbertebhaq """ b = bcgvbaf.Bcgvbaf('ohc shfr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) s = OhcSf(gbc) s.shfr_netf.zbhagcbvag = rkgen[0] vs bcg.qroht: s.shfr_netf.nqq('qroht') vs bcg.sbertebhaq: s.shfr_netf.frgzbq('sbertebhaq') cevag s.zhygvguernqrq s.zhygvguernqrq = Snyfr s.znva() #!/hfe/ova/rai clguba sebz ohc vzcbeg tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ [OHC_QVE=...] 
ohc vavg [-e ubfg:cngu] -- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc vavg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") vs bcg.erzbgr: tvg.vavg_ercb() # ybpny ercb tvg.purpx_ercb_be_qvr() pyv = pyvrag.Pyvrag(bcg.erzbgr, perngr=Gehr) pyv.pybfr() ryfr: tvg.vavg_ercb() #!/hfe/ova/rai clguba vzcbeg flf, zngu, fgehpg, tybo sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * CNTR_FVMR=4096 FUN_CRE_CNTR=CNTR_FVMR/200. qrs zretr(vqkyvfg, ovgf, gnoyr): pbhag = 0 sbe r va tvg.vqkzretr(vqkyvfg): pbhag += 1 cersvk = tvg.rkgenpg_ovgf(r, ovgf) gnoyr[cersvk] = pbhag lvryq r qrs qb_zvqk(bhgqve, bhgsvyranzr, vasvyranzrf): vs abg bhgsvyranzr: nffreg(bhgqve) fhz = Fun1('\0'.wbva(vasvyranzrf)).urkqvtrfg() bhgsvyranzr = '%f/zvqk-%f.zvqk' % (bhgqve, fhz) vac = [] gbgny = 0 sbe anzr va vasvyranzrf: vk = tvg.CnpxVqk(anzr) vac.nccraq(vk) gbgny += yra(vk) ybt('Zretvat %q vaqrkrf (%q bowrpgf).\a' % (yra(vasvyranzrf), gbgny)) vs (abg bcg.sbepr naq (gbgny < 1024 naq yra(vasvyranzrf) < 3)) \ be (bcg.sbepr naq abg gbgny): ybt('zvqk: abguvat gb qb.\a') erghea cntrf = vag(gbgny/FUN_CRE_CNTR) be 1 ovgf = vag(zngu.prvy(zngu.ybt(cntrf, 2))) ragevrf = 2**ovgf ybt('Gnoyr fvmr: %q (%q ovgf)\a' % (ragevrf*4, ovgf)) gnoyr = [0]*ragevrf gel: bf.hayvax(bhgsvyranzr) rkprcg BFReebe: cnff s = bcra(bhgsvyranzr + '.gzc', 'j+') s.jevgr('ZVQK\0\0\0\2') s.jevgr(fgehpg.cnpx('!V', ovgf)) nffreg(s.gryy() == 12) s.jevgr('\0'*4*ragevrf) sbe r va zretr(vac, ovgf, gnoyr): s.jevgr(r) s.jevgr('\0'.wbva(bf.cngu.onfranzr(c) sbe c va vasvyranzrf)) s.frrx(12) s.jevgr(fgehpg.cnpx('!%qV' % ragevrf, *gnoyr)) s.pybfr() bf.eranzr(bhgsvyranzr + '.gzc', bhgsvyranzr) # guvf vf whfg sbe grfgvat vs 0: c = tvg.CnpxZvqk(bhgsvyranzr) nffreg(yra(c.vqkanzrf) == yra(vasvyranzrf)) cevag c.vqkanzrf nffreg(yra(c) == gbgny) cv = vgre(c) sbe v va zretr(vac, gbgny, ovgf, gnoyr): nffreg(v == cv.arkg()) nffreg(c.rkvfgf(v)) cevag bhgsvyranzr bcgfcrp = """ ohc zvqk [bcgvbaf...] 
-- b,bhgchg= bhgchg zvqk svyranzr (qrsnhyg: nhgb-trarengrq) n,nhgb nhgbzngvpnyyl perngr .zvqk sebz nal havaqrkrq .vqk svyrf s,sbepr nhgbzngvpnyyl perngr .zvqk sebz *nyy* .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zvqk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen naq (bcg.nhgb be bcg.sbepr): b.sngny("lbh pna'g hfr -s/-n naq nyfb cebivqr svyranzrf") tvg.purpx_ercb_be_qvr() vs rkgen: qb_zvqk(tvg.ercb('bowrpgf/cnpx'), bcg.bhgchg, rkgen) ryvs bcg.nhgb be bcg.sbepr: cnguf = [tvg.ercb('bowrpgf/cnpx')] cnguf += tybo.tybo(tvg.ercb('vaqrk-pnpur/*/.')) sbe cngu va cnguf: ybt('zvqk: fpnaavat %f\a' % cngu) vs bcg.sbepr: qb_zvqk(cngu, bcg.bhgchg, tybo.tybo('%f/*.vqk' % cngu)) ryvs bcg.nhgb: z = tvg.CnpxVqkYvfg(cngu) arrqrq = {} sbe cnpx va z.cnpxf: # bayl .vqk svyrf jvgubhg n .zvqk ner bcra vs cnpx.anzr.raqfjvgu('.vqk'): arrqrq[cnpx.anzr] = 1 qry z qb_zvqk(cngu, bcg.bhgchg, arrqrq.xrlf()) ybt('\a') ryfr: b.sngny("lbh zhfg hfr -s be -n be cebivqr vachg svyranzrf") #!/hfe/ova/rai clguba vzcbeg flf, bf, enaqbz sebz ohc vzcbeg bcgvbaf sebz ohc.urycref vzcbeg * qrs enaqoybpx(a): y = [] sbe v va kenatr(a): y.nccraq(pue(enaqbz.enaqenatr(0,256))) erghea ''.wbva(y) bcgfcrp = """ ohc qnzntr [-a pbhag] [-f znkfvmr] [-F frrq] -- JNEAVAT: GUVF PBZZNAQ VF RKGERZRYL QNATREBHF a,ahz= ahzore bs oybpxf gb qnzntr f,fvmr= znkvzhz fvmr bs rnpu qnzntrq oybpx creprag= znkvzhz fvmr bs rnpu qnzntrq oybpx (nf n creprag bs ragver svyr) rdhny fcernq qnzntr rirayl guebhtubhg gur svyr F,frrq= enaqbz ahzore frrq (sbe ercrngnoyr grfgf) """ b = bcgvbaf.Bcgvbaf('ohc qnzntr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg rkgen: b.sngny('svyranzrf rkcrpgrq') vs bcg.frrq != Abar: enaqbz.frrq(bcg.frrq) sbe anzr va rkgen: ybt('Qnzntvat "%f"...\a' % anzr) s = bcra(anzr, 'e+o') fg = bf.sfgng(s.svyrab()) fvmr = fg.fg_fvmr vs bcg.creprag be bcg.fvmr: zf1 = vag(sybng(bcg.creprag be 0)/100.0*fvmr) be fvmr zf2 = bcg.fvmr be fvmr znkfvmr = zva(zf1, zf2) ryfr: znkfvmr = 1 puhaxf = bcg.ahz be 10 puhaxfvmr = fvmr/puhaxf sbe e va enatr(puhaxf): fm = enaqbz.enaqenatr(1, znkfvmr+1) vs fm > fvmr: fm = fvmr vs bcg.rdhny: bsf = e*puhaxfvmr ryfr: bsf = enaqbz.enaqenatr(0, fvmr - fm + 1) ybt(' %6q olgrf ng %q\a' % (fm, bsf)) s.frrx(bsf) s.jevgr(enaqoybpx(fm)) s.pybfr() #!/hfe/ova/rai clguba vzcbeg flf, fgehpg, zznc sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * fhfcraqrq_j = Abar qrs vavg_qve(pbaa, net): tvg.vavg_ercb(net) ybt('ohc freire: ohcqve vavgvnyvmrq: %e\a' % tvg.ercbqve) pbaa.bx() qrs frg_qve(pbaa, net): tvg.purpx_ercb_be_qvr(net) ybt('ohc freire: ohcqve vf %e\a' % tvg.ercbqve) pbaa.bx() qrs yvfg_vaqrkrf(pbaa, whax): tvg.purpx_ercb_be_qvr() sbe s va bf.yvfgqve(tvg.ercb('bowrpgf/cnpx')): vs s.raqfjvgu('.vqk'): pbaa.jevgr('%f\a' % s) pbaa.bx() qrs fraq_vaqrk(pbaa, anzr): tvg.purpx_ercb_be_qvr() nffreg(anzr.svaq('/') < 0) nffreg(anzr.raqfjvgu('.vqk')) vqk = tvg.CnpxVqk(tvg.ercb('bowrpgf/cnpx/%f' % anzr)) pbaa.jevgr(fgehpg.cnpx('!V', yra(vqk.znc))) pbaa.jevgr(vqk.znc) pbaa.bx() qrs erprvir_bowrpgf(pbaa, whax): tybony fhfcraqrq_j tvg.purpx_ercb_be_qvr() fhttrfgrq = {} vs fhfcraqrq_j: j = fhfcraqrq_j fhfcraqrq_j = Abar ryfr: j = tvg.CnpxJevgre() juvyr 1: af = pbaa.ernq(4) vs abg af: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq yratgu urnqre, tbg RBS\a') a = fgehpg.hacnpx('!V', af)[0] #ybt('rkcrpgvat %q olgrf\a' % a) vs abg a: ybt('ohc freire: erprvirq %q bowrpg%f.\a' % (j.pbhag, j.pbhag!=1 naq "f" be '')) shyycngu = j.pybfr() vs shyycngu: (qve, anzr) = bf.cngu.fcyvg(shyycngu) pbaa.jevgr('%f.vqk\a' % anzr) pbaa.bx() 
Command front-ends under cmd/ (usage synopses as given in each script's optspec):

cmd/server-cmd.py          bup server
cmd/join-cmd.py            bup join [-r host:path] [refs or hashes...]
cmd/save-cmd.py            bup save [-tc] [-n name]
cmd/tick-cmd.py            bup tick
cmd/index-cmd.py           bup index <-p|m|u> [options...]
cmd/rbackup-server-cmd.py  bup rbackup-server (not intended to be run manually)
cmd/fsck-cmd.py            bup fsck [options...] [filenames...]
cmd/rbackup-cmd.py         bup rbackup index|save|split ...
cmd/newliner-cmd.py        bup newliner
cmd/margin-cmd.py          bup margin
cmd/drecurse-cmd.py        bup drecurse
cmd/split-cmd.py           bup split [-tcb] [-n name] [--bench] [filenames...]
cmd/memtest-cmd.py         bup memtest [-n elements] [-c cycles]
cmd/ls-cmd.py              bup ls
cmd/ftp-cmd.py             bup ftp
cmd/random-cmd.py          bup random [-S seed]
cmd/help-cmd.py            bup help
cmd/fuse-cmd.py            bup fuse [-d] [-f]
cmd/init-cmd.py            [BUP_DIR=...] bup init [-r host:path]
cmd/midx-cmd.py            bup midx [options...]
cmd/damage-cmd.py          bup damage [-n count] [-s maxsize] [-S seed]
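'bup server' and 'bup rbackup' talk to each other over stdin/stdout using a
very simple framing: each object or argument blob is preceded by a 4-byte
big-endian length (struct format '!I'); a length of zero ends an object
stream, and the special value 0xffffffff asks the server to suspend
receive-objects so it can suggest an index to the client.  A minimal sketch
of reading one such record, assuming a file-like 'conn' (read_record is an
illustrative name, not one of bup's helpers):

    import struct

    def read_record(conn):
        # Read one length-prefixed record: 4-byte big-endian unsigned length,
        # then that many bytes of payload.
        header = conn.read(4)
        assert(len(header) == 4)
        n = struct.unpack('!I', header)[0]
        if n == 0:
            return None          # end of the object stream
        if n == 0xffffffff:
            return 'suspend'     # receive-objects suspended by the client
        buf = conn.read(n)
        assert(len(buf) == n)    # a short read is a protocol error
        return buf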
ahzore bs plpyrf gb eha vtaber-zvqk vtaber .zvqk svyrf, hfr bayl .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zrzgrfg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') tvg.vtaber_zvqk = bcg.vtaber_zvqk tvg.purpx_ercb_be_qvr() z = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) plpyrf = bcg.plpyrf be 100 ahzore = bcg.ahzore be 10000 ercbeg(-1) s = bcra('/qri/henaqbz') n = zznc.zznc(-1, 20) ercbeg(0) sbe p va kenatr(plpyrf): sbe a va kenatr(ahzore): o = s.ernq(3) vs 0: olgrf = yvfg(fgehpg.hacnpx('!OOO', o)) + [0]*17 olgrf[2] &= 0ks0 ova = fgehpg.cnpx('!20f', f_sebz_olgrf(olgrf)) ryfr: n[0:2] = o[0:2] n[2] = pue(beq(o[2]) & 0ks0) ova = fge(n[0:20]) #cevag ova.rapbqr('urk') z.rkvfgf(ova) ercbeg((p+1)*ahzore) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * qrs cevag_abqr(grkg, a): cersvk = '' vs bcg.unfu: cersvk += "%f " % a.unfu.rapbqr('urk') vs fgng.F_VFQVE(a.zbqr): cevag '%f%f/' % (cersvk, grkg) ryvs fgng.F_VFYAX(a.zbqr): cevag '%f%f@' % (cersvk, grkg) ryfr: cevag '%f%f' % (cersvk, grkg) bcgfcrp = """ ohc yf -- f,unfu fubj unfu sbe rnpu svyr """ b = bcgvbaf.Bcgvbaf('ohc yf', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) vs abg rkgen: rkgen = ['/'] erg = 0 sbe q va rkgen: gel: a = gbc.yerfbyir(q) vs fgng.F_VFQVE(a.zbqr): sbe fho va a: cevag_abqr(fho.anzr, fho) ryfr: cevag_abqr(q, a) rkprcg isf.AbqrReebe, r: ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er, fgng, ernqyvar, sazngpu sebz ohc vzcbeg bcgvbaf, tvg, fudhbgr, isf sebz ohc.urycref vzcbeg * qrs abqr_anzr(grkg, a): vs fgng.F_VFQVE(a.zbqr): erghea '%f/' % grkg ryvs fgng.F_VFYAX(a.zbqr): erghea '%f@' % grkg ryfr: erghea '%f' % grkg qrs qb_yf(cngu, a): y = [] vs fgng.F_VFQVE(a.zbqr): sbe fho va a: y.nccraq(abqr_anzr(fho.anzr, fho)) ryfr: y.nccraq(abqr_anzr(cngu, a)) cevag pbyhzangr(y, '') qrs jevgr_gb_svyr(vas, bhgs): sbe oybo va puhaxlernqre(vas): bhgs.jevgr(oybo) qrs vachgvgre(): vs bf.vfnggl(flf.fgqva.svyrab()): juvyr 1: gel: lvryq enj_vachg('ohc> ') rkprcg RBSReebe: oernx ryfr: sbe yvar va flf.fgqva: lvryq yvar qrs _pbzcyrgre_trg_fhof(yvar): (dglcr, ynfgjbeq) = fudhbgr.hasvavfurq_jbeq(yvar) (qve,anzr) = bf.cngu.fcyvg(ynfgjbeq) #ybt('\apbzcyrgre: %e %e %e\a' % (dglcr, ynfgjbeq, grkg)) a = cjq.erfbyir(qve) fhof = yvfg(svygre(ynzoqn k: k.anzr.fgnegfjvgu(anzr), a.fhof())) erghea (qve, anzr, dglcr, ynfgjbeq, fhof) _ynfg_yvar = Abar _ynfg_erf = Abar qrs pbzcyrgre(grkg, fgngr): tybony _ynfg_yvar tybony _ynfg_erf gel: yvar = ernqyvar.trg_yvar_ohssre()[:ernqyvar.trg_raqvqk()] vs _ynfg_yvar != yvar: _ynfg_erf = _pbzcyrgre_trg_fhof(yvar) _ynfg_yvar = yvar (qve, anzr, dglcr, ynfgjbeq, fhof) = _ynfg_erf vs fgngr < yra(fhof): fa = fhof[fgngr] fa1 = fa.erfbyir('') # qrers flzyvaxf shyyanzr = bf.cngu.wbva(qve, fa.anzr) vs fgng.F_VFQVE(fa1.zbqr): erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr+'/', grezvangr=Snyfr) ryfr: erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr, grezvangr=Gehr) + ' ' erghea grkg + erg rkprcg Rkprcgvba, r: ybt('\areebe va pbzcyrgvba: %f\a' % r) bcgfcrp = """ ohc sgc """ b = bcgvbaf.Bcgvbaf('ohc sgc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) cjq = gbc vs rkgen: yvarf = rkgen ryfr: ernqyvar.frg_pbzcyrgre_qryvzf(' \g\a\e/') ernqyvar.frg_pbzcyrgre(pbzcyrgre) ernqyvar.cnefr_naq_ovaq("gno: pbzcyrgr") yvarf = vachgvgre() sbe yvar va yvarf: vs abg yvar.fgevc(): pbagvahr jbeqf = [jbeq sbe 
(jbeqfgneg,jbeq) va fudhbgr.dhbgrfcyvg(yvar)] pzq = jbeqf[0].ybjre() #ybt('rkrphgr: %e %e\a' % (pzq, cnez)) gel: vs pzq == 'yf': sbe cnez va (jbeqf[1:] be ['.']): qb_yf(cnez, cjq.erfbyir(cnez)) ryvs pzq == 'pq': sbe cnez va jbeqf[1:]: cjq = cjq.erfbyir(cnez) ryvs pzq == 'cjq': cevag cjq.shyyanzr() ryvs pzq == 'png': sbe cnez va jbeqf[1:]: jevgr_gb_svyr(cjq.erfbyir(cnez).bcra(), flf.fgqbhg) ryvs pzq == 'trg': vs yra(jbeqf) abg va [2,3]: envfr Rkprcgvba('Hfntr: trg [ybpnyanzr]') eanzr = jbeqf[1] (qve,onfr) = bf.cngu.fcyvg(eanzr) yanzr = yra(jbeqf)>2 naq jbeqf[2] be onfr vas = cjq.erfbyir(eanzr).bcra() ybt('Fnivat %e\a' % yanzr) jevgr_gb_svyr(vas, bcra(yanzr, 'jo')) ryvs pzq == 'ztrg': sbe cnez va jbeqf[1:]: (qve,onfr) = bf.cngu.fcyvg(cnez) sbe a va cjq.erfbyir(qve).fhof(): vs sazngpu.sazngpu(a.anzr, onfr): gel: ybt('Fnivat %e\a' % a.anzr) vas = a.bcra() bhgs = bcra(a.anzr, 'jo') jevgr_gb_svyr(vas, bhgs) bhgs.pybfr() rkprcg Rkprcgvba, r: ybt(' reebe: %f\a' % r) ryvs pzq == 'uryc' be pzq == '?': ybt('Pbzznaqf: yf pq cjq png trg ztrg uryc dhvg\a') ryvs pzq == 'dhvg' be pzq == 'rkvg' be pzq == 'olr': oernx ryfr: envfr Rkprcgvba('ab fhpu pbzznaq %e' % pzq) rkprcg Rkprcgvba, r: ybt('reebe: %f\a' % r) #envfr #!/hfe/ova/rai clguba vzcbeg flf, zznc sebz ohc vzcbeg bcgvbaf, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc enaqbz [-F frrq] -- F,frrq= bcgvbany enaqbz ahzore frrq (qrsnhyg 1) s,sbepr cevag enaqbz qngn gb fgqbhg rira vs vg'f n ggl """ b = bcgvbaf.Bcgvbaf('ohc enaqbz', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") gbgny = cnefr_ahz(rkgen[0]) vs bcg.sbepr be (abg bf.vfnggl(1) naq abg ngbv(bf.raiveba.trg('OHC_SBEPR_GGL')) & 1): _unfufcyvg.jevgr_enaqbz(flf.fgqbhg.svyrab(), gbgny, bcg.frrq be 0) ryfr: ybt('reebe: abg jevgvat ovanel qngn gb n grezvany. 
Hfr -s gb sbepr.\a') flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc uryc """ b = bcgvbaf.Bcgvbaf('ohc uryc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) == 0: # gur jenccre cebtenz cebivqrf gur qrsnhyg hfntr fgevat bf.rkrpic(bf.raiveba['OHC_ZNVA_RKR'], ['ohc']) ryvs yra(rkgen) == 1: qbpanzr = (rkgen[0]=='ohc' naq 'ohc' be ('ohc-%f' % rkgen[0])) rkr = flf.neti[0] (rkrcngu, rkrsvyr) = bf.cngu.fcyvg(rkr) znacngu = bf.cngu.wbva(rkrcngu, '../Qbphzragngvba/' + qbpanzr + '.[1-9]') t = tybo.tybo(znacngu) vs t: bf.rkrpic('zna', ['zna', '-y', t[0]]) ryfr: bf.rkrpic('zna', ['zna', qbpanzr]) ryfr: b.sngny("rknpgyl bar pbzznaq anzr rkcrpgrq") #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng, reeab, shfr, er, gvzr, grzcsvyr sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * pynff Fgng(shfr.Fgng): qrs __vavg__(frys): frys.fg_zbqr = 0 frys.fg_vab = 0 frys.fg_qri = 0 frys.fg_ayvax = 0 frys.fg_hvq = 0 frys.fg_tvq = 0 frys.fg_fvmr = 0 frys.fg_ngvzr = 0 frys.fg_zgvzr = 0 frys.fg_pgvzr = 0 frys.fg_oybpxf = 0 frys.fg_oyxfvmr = 0 frys.fg_eqri = 0 pnpur = {} qrs pnpur_trg(gbc, cngu): cnegf = cngu.fcyvg('/') pnpur[('',)] = gbc p = Abar znk = yra(cnegf) #ybt('pnpur: %e\a' % pnpur.xrlf()) sbe v va enatr(znk): cer = cnegf[:znk-v] #ybt('pnpur gelvat: %e\a' % cer) p = pnpur.trg(ghcyr(cer)) vs p: erfg = cnegf[znk-v:] sbe e va erfg: #ybt('erfbyivat %e sebz %e\a' % (e, p.shyyanzr())) p = p.yerfbyir(e) xrl = ghcyr(cer + [e]) #ybt('fnivat: %e\a' % (xrl,)) pnpur[xrl] = p oernx nffreg(p) erghea p pynff OhcSf(shfr.Shfr): qrs __vavg__(frys, gbc): shfr.Shfr.__vavg__(frys) frys.gbc = gbc qrs trgngge(frys, cngu): ybt('--trgngge(%e)\a' % cngu) gel: abqr = pnpur_trg(frys.gbc, cngu) fg = Fgng() fg.fg_zbqr = abqr.zbqr fg.fg_ayvax = abqr.ayvaxf() fg.fg_fvmr = abqr.fvmr() fg.fg_zgvzr = abqr.zgvzr fg.fg_pgvzr = abqr.pgvzr fg.fg_ngvzr = abqr.ngvzr erghea fg rkprcg isf.AbFhpuSvyr: erghea -reeab.RABRAG qrs ernqqve(frys, cngu, bssfrg): ybt('--ernqqve(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) lvryq shfr.Qveragel('.') lvryq shfr.Qveragel('..') sbe fho va abqr.fhof(): lvryq shfr.Qveragel(fho.anzr) qrs ernqyvax(frys, cngu): ybt('--ernqyvax(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) erghea abqr.ernqyvax() qrs bcra(frys, cngu, syntf): ybt('--bcra(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) nppzbqr = bf.B_EQBAYL | bf.B_JEBAYL | bf.B_EQJE vs (syntf & nppzbqr) != bf.B_EQBAYL: erghea -reeab.RNPPRF abqr.bcra() qrs eryrnfr(frys, cngu, syntf): ybt('--eryrnfr(%e)\a' % cngu) qrs ernq(frys, cngu, fvmr, bssfrg): ybt('--ernq(%e)\a' % cngu) a = pnpur_trg(frys.gbc, cngu) b = a.bcra() b.frrx(bssfrg) erghea b.ernq(fvmr) vs abg unfngge(shfr, '__irefvba__'): envfr EhagvzrReebe, "lbhe shfr zbqhyr vf gbb byq sbe shfr.__irefvba__" shfr.shfr_clguba_ncv = (0, 2) bcgfcrp = """ ohc shfr [-q] [-s] -- q,qroht vapernfr qroht yriry s,sbertebhaq eha va sbertebhaq """ b = bcgvbaf.Bcgvbaf('ohc shfr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) s = OhcSf(gbc) s.shfr_netf.zbhagcbvag = rkgen[0] vs bcg.qroht: s.shfr_netf.nqq('qroht') vs bcg.sbertebhaq: s.shfr_netf.frgzbq('sbertebhaq') cevag s.zhygvguernqrq s.zhygvguernqrq = Snyfr s.znva() #!/hfe/ova/rai clguba sebz ohc vzcbeg tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ [OHC_QVE=...] 
ohc vavg [-e ubfg:cngu] -- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc vavg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") vs bcg.erzbgr: tvg.vavg_ercb() # ybpny ercb tvg.purpx_ercb_be_qvr() pyv = pyvrag.Pyvrag(bcg.erzbgr, perngr=Gehr) pyv.pybfr() ryfr: tvg.vavg_ercb() #!/hfe/ova/rai clguba vzcbeg flf, zngu, fgehpg, tybo sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * CNTR_FVMR=4096 FUN_CRE_CNTR=CNTR_FVMR/200. qrs zretr(vqkyvfg, ovgf, gnoyr): pbhag = 0 sbe r va tvg.vqkzretr(vqkyvfg): pbhag += 1 cersvk = tvg.rkgenpg_ovgf(r, ovgf) gnoyr[cersvk] = pbhag lvryq r qrs qb_zvqk(bhgqve, bhgsvyranzr, vasvyranzrf): vs abg bhgsvyranzr: nffreg(bhgqve) fhz = Fun1('\0'.wbva(vasvyranzrf)).urkqvtrfg() bhgsvyranzr = '%f/zvqk-%f.zvqk' % (bhgqve, fhz) vac = [] gbgny = 0 sbe anzr va vasvyranzrf: vk = tvg.CnpxVqk(anzr) vac.nccraq(vk) gbgny += yra(vk) ybt('Zretvat %q vaqrkrf (%q bowrpgf).\a' % (yra(vasvyranzrf), gbgny)) vs (abg bcg.sbepr naq (gbgny < 1024 naq yra(vasvyranzrf) < 3)) \ be (bcg.sbepr naq abg gbgny): ybt('zvqk: abguvat gb qb.\a') erghea cntrf = vag(gbgny/FUN_CRE_CNTR) be 1 ovgf = vag(zngu.prvy(zngu.ybt(cntrf, 2))) ragevrf = 2**ovgf ybt('Gnoyr fvmr: %q (%q ovgf)\a' % (ragevrf*4, ovgf)) gnoyr = [0]*ragevrf gel: bf.hayvax(bhgsvyranzr) rkprcg BFReebe: cnff s = bcra(bhgsvyranzr + '.gzc', 'j+') s.jevgr('ZVQK\0\0\0\2') s.jevgr(fgehpg.cnpx('!V', ovgf)) nffreg(s.gryy() == 12) s.jevgr('\0'*4*ragevrf) sbe r va zretr(vac, ovgf, gnoyr): s.jevgr(r) s.jevgr('\0'.wbva(bf.cngu.onfranzr(c) sbe c va vasvyranzrf)) s.frrx(12) s.jevgr(fgehpg.cnpx('!%qV' % ragevrf, *gnoyr)) s.pybfr() bf.eranzr(bhgsvyranzr + '.gzc', bhgsvyranzr) # guvf vf whfg sbe grfgvat vs 0: c = tvg.CnpxZvqk(bhgsvyranzr) nffreg(yra(c.vqkanzrf) == yra(vasvyranzrf)) cevag c.vqkanzrf nffreg(yra(c) == gbgny) cv = vgre(c) sbe v va zretr(vac, gbgny, ovgf, gnoyr): nffreg(v == cv.arkg()) nffreg(c.rkvfgf(v)) cevag bhgsvyranzr bcgfcrp = """ ohc zvqk [bcgvbaf...] 
-- b,bhgchg= bhgchg zvqk svyranzr (qrsnhyg: nhgb-trarengrq) n,nhgb nhgbzngvpnyyl perngr .zvqk sebz nal havaqrkrq .vqk svyrf s,sbepr nhgbzngvpnyyl perngr .zvqk sebz *nyy* .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zvqk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen naq (bcg.nhgb be bcg.sbepr): b.sngny("lbh pna'g hfr -s/-n naq nyfb cebivqr svyranzrf") tvg.purpx_ercb_be_qvr() vs rkgen: qb_zvqk(tvg.ercb('bowrpgf/cnpx'), bcg.bhgchg, rkgen) ryvs bcg.nhgb be bcg.sbepr: cnguf = [tvg.ercb('bowrpgf/cnpx')] cnguf += tybo.tybo(tvg.ercb('vaqrk-pnpur/*/.')) sbe cngu va cnguf: ybt('zvqk: fpnaavat %f\a' % cngu) vs bcg.sbepr: qb_zvqk(cngu, bcg.bhgchg, tybo.tybo('%f/*.vqk' % cngu)) ryvs bcg.nhgb: z = tvg.CnpxVqkYvfg(cngu) arrqrq = {} sbe cnpx va z.cnpxf: # bayl .vqk svyrf jvgubhg n .zvqk ner bcra vs cnpx.anzr.raqfjvgu('.vqk'): arrqrq[cnpx.anzr] = 1 qry z qb_zvqk(cngu, bcg.bhgchg, arrqrq.xrlf()) ybt('\a') ryfr: b.sngny("lbh zhfg hfr -s be -n be cebivqr vachg svyranzrf") #!/hfe/ova/rai clguba vzcbeg flf, bf, enaqbz sebz ohc vzcbeg bcgvbaf sebz ohc.urycref vzcbeg * qrs enaqoybpx(a): y = [] sbe v va kenatr(a): y.nccraq(pue(enaqbz.enaqenatr(0,256))) erghea ''.wbva(y) bcgfcrp = """ ohc qnzntr [-a pbhag] [-f znkfvmr] [-F frrq] -- JNEAVAT: GUVF PBZZNAQ VF RKGERZRYL QNATREBHF a,ahz= ahzore bs oybpxf gb qnzntr f,fvmr= znkvzhz fvmr bs rnpu qnzntrq oybpx creprag= znkvzhz fvmr bs rnpu qnzntrq oybpx (nf n creprag bs ragver svyr) rdhny fcernq qnzntr rirayl guebhtubhg gur svyr F,frrq= enaqbz ahzore frrq (sbe ercrngnoyr grfgf) """ b = bcgvbaf.Bcgvbaf('ohc qnzntr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg rkgen: b.sngny('svyranzrf rkcrpgrq') vs bcg.frrq != Abar: enaqbz.frrq(bcg.frrq) sbe anzr va rkgen: ybt('Qnzntvat "%f"...\a' % anzr) s = bcra(anzr, 'e+o') fg = bf.sfgng(s.svyrab()) fvmr = fg.fg_fvmr vs bcg.creprag be bcg.fvmr: zf1 = vag(sybng(bcg.creprag be 0)/100.0*fvmr) be fvmr zf2 = bcg.fvmr be fvmr znkfvmr = zva(zf1, zf2) ryfr: znkfvmr = 1 puhaxf = bcg.ahz be 10 puhaxfvmr = fvmr/puhaxf sbe e va enatr(puhaxf): fm = enaqbz.enaqenatr(1, znkfvmr+1) vs fm > fvmr: fm = fvmr vs bcg.rdhny: bsf = e*puhaxfvmr ryfr: bsf = enaqbz.enaqenatr(0, fvmr - fm + 1) ybt(' %6q olgrf ng %q\a' % (fm, bsf)) s.frrx(bsf) s.jevgr(enaqoybpx(fm)) s.pybfr() #!/hfe/ova/rai clguba vzcbeg flf, fgehpg, zznc sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * fhfcraqrq_j = Abar qrs vavg_qve(pbaa, net): tvg.vavg_ercb(net) ybt('ohc freire: ohcqve vavgvnyvmrq: %e\a' % tvg.ercbqve) pbaa.bx() qrs frg_qve(pbaa, net): tvg.purpx_ercb_be_qvr(net) ybt('ohc freire: ohcqve vf %e\a' % tvg.ercbqve) pbaa.bx() qrs yvfg_vaqrkrf(pbaa, whax): tvg.purpx_ercb_be_qvr() sbe s va bf.yvfgqve(tvg.ercb('bowrpgf/cnpx')): vs s.raqfjvgu('.vqk'): pbaa.jevgr('%f\a' % s) pbaa.bx() qrs fraq_vaqrk(pbaa, anzr): tvg.purpx_ercb_be_qvr() nffreg(anzr.svaq('/') < 0) nffreg(anzr.raqfjvgu('.vqk')) vqk = tvg.CnpxVqk(tvg.ercb('bowrpgf/cnpx/%f' % anzr)) pbaa.jevgr(fgehpg.cnpx('!V', yra(vqk.znc))) pbaa.jevgr(vqk.znc) pbaa.bx() qrs erprvir_bowrpgf(pbaa, whax): tybony fhfcraqrq_j tvg.purpx_ercb_be_qvr() fhttrfgrq = {} vs fhfcraqrq_j: j = fhfcraqrq_j fhfcraqrq_j = Abar ryfr: j = tvg.CnpxJevgre() juvyr 1: af = pbaa.ernq(4) vs abg af: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq yratgu urnqre, tbg RBS\a') a = fgehpg.hacnpx('!V', af)[0] #ybt('rkcrpgvat %q olgrf\a' % a) vs abg a: ybt('ohc freire: erprvirq %q bowrpg%f.\a' % (j.pbhag, j.pbhag!=1 naq "f" be '')) shyycngu = j.pybfr() vs shyycngu: (qve, anzr) = bf.cngu.fcyvg(shyycngu) pbaa.jevgr('%f.vqk\a' % anzr) pbaa.bx() 
erghea ryvs a == 0kssssssss: ybt('ohc freire: erprvir-bowrpgf fhfcraqrq.\a') fhfcraqrq_j = j pbaa.bx() erghea ohs = pbaa.ernq(a) # bowrpg fvmrf va ohc ner ernfbanoyl fznyy #ybt('ernq %q olgrf\a' % a) vs yra(ohs) < a: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq %q olgrf, tbg %q\a' % (a, yra(ohs))) (glcr, pbagrag) = tvg._qrpbqr_cnpxbow(ohs) fun = tvg.pnyp_unfu(glcr, pbagrag) byqcnpx = j.rkvfgf(fun) # SVKZR: jr bayl fhttrfg n fvatyr vaqrk cre plpyr, orpnhfr gur pyvrag # vf pheeragyl qhzo gb qbjaybnq zber guna bar cre plpyr naljnl. # Npghnyyl jr fubhyq svk gur pyvrag, ohg guvf vf n zvabe bcgvzvmngvba # ba gur freire fvqr. vs abg fhttrfgrq naq \ byqcnpx naq (byqcnpx == Gehr be byqcnpx.raqfjvgu('.zvqk')): # SVKZR: jr fubhyqa'g ernyyl unir gb xabj nobhg zvqk svyrf # ng guvf ynlre. Ohg rkvfgf() ba n zvqk qbrfa'g erghea gur # cnpxanzr (fvapr vg qbrfa'g xabj)... cebonoyl jr fubhyq whfg # svk gung qrsvpvrapl bs zvqk svyrf riraghnyyl, nygubhtu vg'yy # znxr gur svyrf ovttre. Guvf zrgubq vf pregnvayl abg irel # rssvpvrag. j.bowpnpur.erserfu(fxvc_zvqk = Gehr) byqcnpx = j.bowpnpur.rkvfgf(fun) ybt('arj fhttrfgvba: %e\a' % byqcnpx) nffreg(byqcnpx) nffreg(byqcnpx != Gehr) nffreg(abg byqcnpx.raqfjvgu('.zvqk')) j.bowpnpur.erserfu(fxvc_zvqk = Snyfr) vs abg fhttrfgrq naq byqcnpx: nffreg(byqcnpx.raqfjvgu('.vqk')) (qve,anzr) = bf.cngu.fcyvg(byqcnpx) vs abg (anzr va fhttrfgrq): ybt("ohc freire: fhttrfgvat vaqrk %f\a" % anzr) pbaa.jevgr('vaqrk %f\a' % anzr) fhttrfgrq[anzr] = 1 ryfr: j._enj_jevgr([ohs]) # ABGERNPURQ qrs ernq_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() e = tvg.ernq_ers(ersanzr) pbaa.jevgr('%f\a' % (e be '').rapbqr('urk')) pbaa.bx() qrs hcqngr_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() arjiny = pbaa.ernqyvar().fgevc() byqiny = pbaa.ernqyvar().fgevc() tvg.hcqngr_ers(ersanzr, arjiny.qrpbqr('urk'), byqiny.qrpbqr('urk')) pbaa.bx() qrs png(pbaa, vq): tvg.purpx_ercb_be_qvr() gel: sbe oybo va tvg.png(vq): pbaa.jevgr(fgehpg.cnpx('!V', yra(oybo))) pbaa.jevgr(oybo) rkprcg XrlReebe, r: ybt('freire: reebe: %f\a' % r) pbaa.jevgr('\0\0\0\0') pbaa.reebe(r) ryfr: pbaa.jevgr('\0\0\0\0') pbaa.bx() bcgfcrp = """ ohc freire """ b = bcgvbaf.Bcgvbaf('ohc freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') ybt('ohc freire: ernqvat sebz fgqva.\a') pbzznaqf = { 'vavg-qve': vavg_qve, 'frg-qve': frg_qve, 'yvfg-vaqrkrf': yvfg_vaqrkrf, 'fraq-vaqrk': fraq_vaqrk, 'erprvir-bowrpgf': erprvir_bowrpgf, 'ernq-ers': ernq_ers, 'hcqngr-ers': hcqngr_ers, 'png': png, } # SVKZR: guvf cebgbpby vf gbgnyyl ynzr naq abg ng nyy shgher-cebbs. # (Rfcrpvnyyl fvapr jr nobeg pbzcyrgryl nf fbba nf *nalguvat* onq unccraf) pbaa = Pbaa(flf.fgqva, flf.fgqbhg) ye = yvarernqre(pbaa) sbe _yvar va ye: yvar = _yvar.fgevc() vs abg yvar: pbagvahr ybt('ohc freire: pbzznaq: %e\a' % yvar) jbeqf = yvar.fcyvg(' ', 1) pzq = jbeqf[0] erfg = yra(jbeqf)>1 naq jbeqf[1] be '' vs pzq == 'dhvg': oernx ryfr: pzq = pbzznaqf.trg(pzq) vs pzq: pzq(pbaa, erfg) ryfr: envfr Rkprcgvba('haxabja freire pbzznaq: %e\a' % yvar) ybt('ohc freire: qbar\a') #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc wbva [-e ubfg:cngu] [ersf be unfurf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc wbva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg rkgen: rkgen = yvarernqre(flf.fgqva) erg = 0 vs bcg.erzbgr: pyv = pyvrag.Pyvrag(bcg.erzbgr) png = pyv.png ryfr: pc = tvg.PngCvcr() png = pc.wbva sbe vq va rkgen: gel: sbe oybo va png(vq): flf.fgqbhg.jevgr(oybo) rkprcg XrlReebe, r: flf.fgqbhg.syhfu() ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, er, reeab, fgng, gvzr, zngu sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, vaqrk, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc fnir [-gp] [-a anzr] -- e,erzbgr= erzbgr ercbfvgbel cngu g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) d,dhvrg qba'g fubj cebterff zrgre fznyyre= bayl onpx hc svyrf fznyyre guna a olgrf """ b = bcgvbaf.Bcgvbaf('ohc fnir', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny("hfr bar be zber bs -g, -p, -a") vs abg rkgen: b.sngny("ab svyranzrf tvira") bcg.cebterff = (vfggl naq abg bcg.dhvrg) bcg.fznyyre = cnefr_ahz(bcg.fznyyre be 0) vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() unaqyr_pgey_p() qrs rngfynfu(qve): vs qve.raqfjvgu('/'): erghea qve[:-1] ryfr: erghea qve cnegf = [''] funyvfgf = [[]] qrs _chfu(cneg): nffreg(cneg) cnegf.nccraq(cneg) funyvfgf.nccraq([]) qrs _cbc(sbepr_gerr): nffreg(yra(cnegf) >= 1) cneg = cnegf.cbc() funyvfg = funyvfgf.cbc() gerr = sbepr_gerr be j.arj_gerr(funyvfg) vs funyvfgf: funyvfgf[-1].nccraq(('40000', cneg, gerr)) ryfr: # guvf jnf gur gbcyriry, fb chg vg onpx sbe fnavgl funyvfgf.nccraq(funyvfg) erghea gerr ynfgerznva = Abar qrs cebterff_ercbeg(a): tybony pbhag, fhopbhag, ynfgerznva fhopbhag += a pp = pbhag + fhopbhag cpg = gbgny naq (pp*100.0/gbgny) be 0 abj = gvzr.gvzr() ryncfrq = abj - gfgneg xcf = ryncfrq naq vag(pp/1024./ryncfrq) xcf_senp = 10 ** vag(zngu.ybt(xcf+1, 10) - 1) xcf = vag(xcf/xcf_senp)*xcf_senp vs pp: erznva = ryncfrq*1.0/pp * (gbgny-pp) ryfr: erznva = 0.0 vs (ynfgerznva naq (erznva > ynfgerznva) naq ((erznva - ynfgerznva)/ynfgerznva < 0.05)): erznva = ynfgerznva ryfr: ynfgerznva = erznva ubhef = vag(erznva/60/60) zvaf = vag(erznva/60 - ubhef*60) frpf = vag(erznva - ubhef*60*60 - zvaf*60) vs ryncfrq < 30: erznvafge = '' xcffge = '' ryfr: xcffge = '%qx/f' % xcf vs ubhef: erznvafge = '%qu%qz' % (ubhef, zvaf) ryvs zvaf: erznvafge = '%qz%q' % (zvaf, frpf) ryfr: erznvafge = '%qf' % frpf cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf) %f %f\e' % (cpg, pp/1024, gbgny/1024, spbhag, sgbgny, erznvafge, xcffge)) e = vaqrk.Ernqre(tvg.ercb('ohcvaqrk')) qrs nyernql_fnirq(rag): erghea rag.vf_inyvq() naq j.rkvfgf(rag.fun) naq rag.fun qrs jnagerphefr_cer(rag): erghea abg nyernql_fnirq(rag) qrs jnagerphefr_qhevat(rag): erghea abg nyernql_fnirq(rag) be rag.fun_zvffvat() gbgny = sgbgny = 0 vs bcg.cebterff: sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_cer): vs abg (sgbgny % 10024): cebterff('Ernqvat vaqrk: %q\e' % sgbgny) rkvfgf = rag.rkvfgf() unfuinyvq = nyernql_fnirq(rag) 
rag.frg_fun_zvffvat(abg unfuinyvq) vs abg bcg.fznyyre be rag.fvmr < bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: gbgny += rag.fvmr sgbgny += 1 cebterff('Ernqvat vaqrk: %q, qbar.\a' % sgbgny) unfufcyvg.cebterff_pnyyonpx = cebterff_ercbeg gfgneg = gvzr.gvzr() pbhag = fhopbhag = spbhag = 0 ynfgfxvc_anzr = Abar ynfgqve = '' sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_qhevat): (qve, svyr) = bf.cngu.fcyvg(rag.anzr) rkvfgf = (rag.syntf & vaqrk.VK_RKVFGF) unfuinyvq = nyernql_fnirq(rag) jnfzvffvat = rag.fun_zvffvat() byqfvmr = rag.fvmr vs bcg.ireobfr: vs abg rkvfgf: fgnghf = 'Q' ryvs abg unfuinyvq: vs rag.fun == vaqrk.RZCGL_FUN: fgnghf = 'N' ryfr: fgnghf = 'Z' ryfr: fgnghf = ' ' vs bcg.ireobfr >= 2: ybt('%f %-70f\a' % (fgnghf, rag.anzr)) ryvs abg fgng.F_VFQVE(rag.zbqr) naq ynfgqve != qve: vs abg ynfgqve.fgnegfjvgu(qve): ybt('%f %-70f\a' % (fgnghf, bf.cngu.wbva(qve, ''))) ynfgqve = qve vs bcg.cebterff: cebterff_ercbeg(0) spbhag += 1 vs abg rkvfgf: pbagvahr vs bcg.fznyyre naq rag.fvmr >= bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: nqq_reebe('fxvccvat ynetr svyr "%f"' % rag.anzr) ynfgfxvc_anzr = rag.anzr pbagvahr nffreg(qve.fgnegfjvgu('/')) qvec = qve.fcyvg('/') juvyr cnegf > qvec: _cbc(sbepr_gerr = Abar) vs qve != '/': sbe cneg va qvec[yra(cnegf):]: _chfu(cneg) vs abg svyr: # ab svyranzr cbegvba zrnaf guvf vf n fhoqve. Ohg # fho/cneragqverpgbevrf nyernql unaqyrq va gur cbc/chfu() cneg nobir. byqgerr = nyernql_fnirq(rag) # znl or Abar arjgerr = _cbc(sbepr_gerr = byqgerr) vs abg byqgerr: vs ynfgfxvc_anzr naq ynfgfxvc_anzr.fgnegfjvgu(rag.anzr): rag.vainyvqngr() ryfr: rag.inyvqngr(040000, arjgerr) rag.ercnpx() vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr pbagvahr # vg'f abg n qverpgbel vq = Abar vs unfuinyvq: zbqr = '%b' % rag.tvgzbqr vq = rag.fun funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) ryfr: vs fgng.F_VFERT(rag.zbqr): gel: s = unfufcyvg.bcra_abngvzr(rag.anzr) rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = unfufcyvg.fcyvg_gb_oybo_be_gerr(j, [s]) ryfr: vs fgng.F_VFQVE(rag.zbqr): nffreg(0) # unaqyrq nobir ryvs fgng.F_VFYAX(rag.zbqr): gel: ey = bf.ernqyvax(rag.anzr) rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = ('120000', j.arj_oybo(ey)) ryfr: nqq_reebe(Rkprcgvba('fxvccvat fcrpvny svyr "%f"' % rag.anzr)) ynfgfxvc_anzr = rag.anzr vs vq: rag.inyvqngr(vag(zbqr, 8), vq) rag.ercnpx() funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr fhopbhag = 0 vs bcg.cebterff: cpg = gbgny naq pbhag*100.0/gbgny be 100 cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf), qbar. 
\a' % (cpg, pbhag/1024, gbgny/1024, spbhag, sgbgny)) juvyr yra(cnegf) > 1: _cbc(sbepr_gerr = Abar) nffreg(yra(funyvfgf) == 1) gerr = j.arj_gerr(funyvfgf[-1]) vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fnir\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq juvyr fnivat.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc gvpx """ b = bcgvbaf.Bcgvbaf('ohc gvpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") g = gvzr.gvzr() gyrsg = 1 - (g - vag(g)) gvzr.fyrrc(gyrsg) #!/hfe/ova/rai clguba vzcbeg bf, flf, fgng, gvzr sebz ohc vzcbeg bcgvbaf, tvg, vaqrk, qerphefr sebz ohc.urycref vzcbeg * qrs zretr_vaqrkrf(bhg, e1, e2): sbe r va vaqrk.ZretrVgre([e1, e2]): # SVKZR: fubhyqa'g jr erzbir qryrgrq ragevrf riraghnyyl? Jura? bhg.nqq_vkragel(r) pynff VgreUrycre: qrs __vavg__(frys, y): frys.v = vgre(y) frys.phe = Abar frys.arkg() qrs arkg(frys): gel: frys.phe = frys.v.arkg() rkprcg FgbcVgrengvba: frys.phe = Abar erghea frys.phe qrs purpx_vaqrk(ernqre): gel: ybt('purpx: purpxvat sbejneq vgrengvba...\a') r = Abar q = {} sbe r va ernqre.sbejneq_vgre(): vs r.puvyqera_a: vs bcg.ireobfr: ybt('%08k+%-4q %e\a' % (r.puvyqera_bsf, r.puvyqera_a, r.anzr)) nffreg(r.puvyqera_bsf) nffreg(r.anzr.raqfjvgu('/')) nffreg(abg q.trg(r.puvyqera_bsf)) q[r.puvyqera_bsf] = 1 vs r.syntf & vaqrk.VK_UNFUINYVQ: nffreg(r.fun != vaqrk.RZCGL_FUN) nffreg(r.tvgzbqr) nffreg(abg r be r.anzr == '/') # ynfg ragel vf *nyjnlf* / ybt('purpx: purpxvat abezny vgrengvba...\a') ynfg = Abar sbe r va ernqre: vs ynfg: nffreg(ynfg > r.anzr) ynfg = r.anzr rkprcg: ybt('vaqrk reebe! ng %e\a' % r) envfr ybt('purpx: cnffrq.\a') qrs hcqngr_vaqrk(gbc): ev = vaqrk.Ernqre(vaqrksvyr) jv = vaqrk.Jevgre(vaqrksvyr) evt = VgreUrycre(ev.vgre(anzr=gbc)) gfgneg = vag(gvzr.gvzr()) unfutra = Abar vs bcg.snxr_inyvq: qrs unfutra(anzr): erghea (0100644, vaqrk.SNXR_FUN) gbgny = 0 sbe (cngu,cfg) va qerphefr.erphefvir_qveyvfg([gbc], kqri=bcg.kqri): vs bcg.ireobfr>=2 be (bcg.ireobfr==1 naq fgng.F_VFQVE(cfg.fg_zbqr)): flf.fgqbhg.jevgr('%f\a' % cngu) flf.fgqbhg.syhfu() cebterff('Vaqrkvat: %q\e' % gbgny) ryvs abg (gbgny % 128): cebterff('Vaqrkvat: %q\e' % gbgny) gbgny += 1 juvyr evt.phe naq evt.phe.anzr > cngu: # qryrgrq cnguf vs evt.phe.rkvfgf(): evt.phe.frg_qryrgrq() evt.phe.ercnpx() evt.arkg() vs evt.phe naq evt.phe.anzr == cngu: # cnguf gung nyernql rkvfgrq vs cfg: evt.phe.sebz_fgng(cfg, gfgneg) vs abg (evt.phe.syntf & vaqrk.VK_UNFUINYVQ): vs unfutra: (evt.phe.tvgzbqr, evt.phe.fun) = unfutra(cngu) evt.phe.syntf |= vaqrk.VK_UNFUINYVQ vs bcg.snxr_vainyvq: evt.phe.vainyvqngr() evt.phe.ercnpx() evt.arkg() ryfr: # arj cnguf jv.nqq(cngu, cfg, unfutra = unfutra) cebterff('Vaqrkvat: %q, qbar.\a' % gbgny) vs ev.rkvfgf(): ev.fnir() jv.syhfu() vs jv.pbhag: je = jv.arj_ernqre() vs bcg.purpx: ybt('purpx: orsber zretvat: byqsvyr\a') purpx_vaqrk(ev) ybt('purpx: orsber zretvat: arjsvyr\a') purpx_vaqrk(je) zv = vaqrk.Jevgre(vaqrksvyr) zretr_vaqrkrf(zv, ev, je) ev.pybfr() zv.pybfr() je.pybfr() jv.nobeg() ryfr: jv.pybfr() bcgfcrp = """ ohc vaqrk <-c|z|h> [bcgvbaf...] 
-- c,cevag cevag gur vaqrk ragevrf sbe gur tvira anzrf (nyfb jbexf jvgu -h) z,zbqvsvrq cevag bayl nqqrq/qryrgrq/zbqvsvrq svyrf (vzcyvrf -c) f,fgnghf cevag rnpu svyranzr jvgu n fgnghf pune (N/Z/Q) (vzcyvrf -c) U,unfu cevag gur unfu sbe rnpu bowrpg arkg gb vgf anzr (vzcyvrf -c) y,ybat cevag zber vasbezngvba nobhg rnpu svyr h,hcqngr (erphefviryl) hcqngr gur vaqrk ragevrf sbe gur tvira svyranzrf k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf snxr-inyvq znex nyy vaqrk ragevrf nf hc-gb-qngr rira vs gurl nera'g snxr-vainyvq znex nyy vaqrk ragevrf nf vainyvq purpx pnershyyl purpx vaqrk svyr vagrtevgl s,vaqrksvyr= gur anzr bs gur vaqrk svyr (qrsnhyg 'vaqrk') i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) """ b = bcgvbaf.Bcgvbaf('ohc vaqrk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg (bcg.zbqvsvrq be bcg['cevag'] be bcg.fgnghf be bcg.hcqngr be bcg.purpx): b.sngny('fhccyl bar be zber bs -c, -f, -z, -h, be --purpx') vs (bcg.snxr_inyvq be bcg.snxr_vainyvq) naq abg bcg.hcqngr: b.sngny('--snxr-{va,}inyvq ner zrnavatyrff jvgubhg -h') vs bcg.snxr_inyvq naq bcg.snxr_vainyvq: b.sngny('--snxr-inyvq vf vapbzcngvoyr jvgu --snxr-vainyvq') tvg.purpx_ercb_be_qvr() vaqrksvyr = bcg.vaqrksvyr be tvg.ercb('ohcvaqrk') unaqyr_pgey_p() vs bcg.purpx: ybt('purpx: fgnegvat vavgvny purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) cnguf = vaqrk.erqhpr_cnguf(rkgen) vs bcg.hcqngr: vs abg cnguf: b.sngny('hcqngr (-h) erdhrfgrq ohg ab cnguf tvira') sbe (ec,cngu) va cnguf: hcqngr_vaqrk(ec) vs bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq: sbe (anzr, rag) va vaqrk.Ernqre(vaqrksvyr).svygre(rkgen be ['']): vs (bcg.zbqvsvrq naq (rag.vf_inyvq() be rag.vf_qryrgrq() be abg rag.zbqr)): pbagvahr yvar = '' vs bcg.fgnghf: vs rag.vf_qryrgrq(): yvar += 'Q ' ryvs abg rag.vf_inyvq(): vs rag.fun == vaqrk.RZCGL_FUN: yvar += 'N ' ryfr: yvar += 'Z ' ryfr: yvar += ' ' vs bcg.unfu: yvar += rag.fun.rapbqr('urk') + ' ' vs bcg.ybat: yvar += "%7f %7f " % (bpg(rag.zbqr), bpg(rag.tvgzbqr)) cevag yvar + (anzr be './') vs bcg.purpx naq (bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq be bcg.hcqngr): ybt('purpx: fgnegvat svany purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg sebz ohc vzcbeg bcgvbaf, urycref bcgfcrp = """ ohc eonpxhc-freire -- Guvf pbzznaq vf abg vagraqrq gb or eha znahnyyl. """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc-freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') # trg gur fhopbzznaq'f neti. # Abeznyyl jr pbhyq whfg cnff guvf ba gur pbzznaq yvar, ohg fvapr jr'yy bsgra # or trggvat pnyyrq ba gur bgure raq bs na ffu cvcr, juvpu graqf gb znatyr # neti (ol fraqvat vg ivn gur furyy), guvf jnl vf zhpu fnsre. ohs = flf.fgqva.ernq(4) fm = fgehpg.hacnpx('!V', ohs)[0] nffreg(fm > 0) nffreg(fm < 1000000) ohs = flf.fgqva.ernq(fm) nffreg(yra(ohs) == fm) neti = ohs.fcyvg('\0') # fgqva/fgqbhg ner fhccbfrqyl pbaarpgrq gb 'ohc freire' gung gur pnyyre # fgnegrq sbe hf (bsgra ba gur bgure raq bs na ffu ghaary), fb jr qba'g jnag # gb zvfhfr gurz. Zbir gurz bhg bs gur jnl, gura ercynpr fgqbhg jvgu # n cbvagre gb fgqree va pnfr bhe fhopbzznaq jnagf gb qb fbzrguvat jvgu vg. # # Vg zvtug or avpr gb qb gur fnzr jvgu fgqva, ohg zl rkcrevzragf fubjrq gung # ffu frrzf gb znxr vgf puvyq'f fgqree n ernqnoyr-ohg-arire-ernqf-nalguvat # fbpxrg. Gurl ernyyl fubhyq unir hfrq fuhgqbja(FUHG_JE) ba gur bgure raq # bs vg, ohg cebonoyl qvqa'g. 
Naljnl, vg'f gbb zrffl, fb yrg'f whfg znxr fher # nalbar ernqvat sebz fgqva vf qvfnccbvagrq. # # (Lbh pna'g whfg yrnir fgqva/fgqbhg "abg bcra" ol pybfvat gur svyr # qrfpevcgbef. Gura gur arkg svyr gung bcraf vf nhgbzngvpnyyl nffvtarq 0 be 1, # naq crbcyr *gelvat* gb ernq/jevgr fgqva/fgqbhg trg fperjrq.) bf.qhc2(0, 3) bf.qhc2(1, 4) bf.qhc2(2, 1) sq = bf.bcra('/qri/ahyy', bf.B_EQBAYL) bf.qhc2(sq, 0) bf.pybfr(sq) bf.raiveba['OHC_FREIRE_ERIREFR'] = urycref.ubfganzr() bf.rkrpic(neti[0], neti) flf.rkvg(99) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo, fhocebprff, gvzr sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * cne2_bx = 0 ahyys = bcra('/qri/ahyy') qrs qroht(f): vs bcg.ireobfr: ybt(f) qrs eha(neti): # ng yrnfg va clguba 2.5, hfvat "fgqbhg=2" be "fgqbhg=flf.fgqree" orybj # qbrfa'g npghnyyl jbex, orpnhfr fhocebprff pybfrf sq #2 evtug orsber # rkrpvat sbe fbzr ernfba. Fb jr jbex nebhaq vg ol qhcyvpngvat gur sq # svefg. sq = bf.qhc(2) # pbcl fgqree gel: c = fhocebprff.Cbcra(neti, fgqbhg=sq, pybfr_sqf=Snyfr) erghea c.jnvg() svanyyl: bf.pybfr(sq) qrs cne2_frghc(): tybony cne2_bx ei = 1 gel: c = fhocebprff.Cbcra(['cne2', '--uryc'], fgqbhg=ahyys, fgqree=ahyys, fgqva=ahyys) ei = c.jnvg() rkprcg BFReebe: ybt('sfpx: jneavat: cne2 abg sbhaq; qvfnoyvat erpbirel srngherf.\a') ryfr: cne2_bx = 1 qrs cnei(yiy): vs bcg.ireobfr >= yiy: vs vfggl: erghea [] ryfr: erghea ['-d'] ryfr: erghea ['-dd'] qrs cne2_trarengr(onfr): erghea eha(['cne2', 'perngr', '-a1', '-p200'] + cnei(2) + ['--', onfr, onfr+'.cnpx', onfr+'.vqk']) qrs cne2_irevsl(onfr): erghea eha(['cne2', 'irevsl'] + cnei(3) + ['--', onfr]) qrs cne2_ercnve(onfr): erghea eha(['cne2', 'ercnve'] + cnei(2) + ['--', onfr]) qrs dhvpx_irevsl(onfr): s = bcra(onfr + '.cnpx', 'eo') s.frrx(-20, 2) jnagfhz = s.ernq(20) nffreg(yra(jnagfhz) == 20) s.frrx(0) fhz = Fun1() sbe o va puhaxlernqre(s, bf.sfgng(s.svyrab()).fg_fvmr - 20): fhz.hcqngr(o) vs fhz.qvtrfg() != jnagfhz: envfr InyhrReebe('rkcrpgrq %e, tbg %e' % (jnagfhz.rapbqr('urk'), fhz.urkqvtrfg())) qrs tvg_irevsl(onfr): vs bcg.dhvpx: gel: dhvpx_irevsl(onfr) rkprcg Rkprcgvba, r: qroht('reebe: %f\a' % r) erghea 1 erghea 0 ryfr: erghea eha(['tvg', 'irevsl-cnpx', '--', onfr]) qrs qb_cnpx(onfr, ynfg): pbqr = 0 vs cne2_bx naq cne2_rkvfgf naq (bcg.ercnve be abg bcg.trarengr): ierfhyg = cne2_irevsl(onfr) vs ierfhyg != 0: vs bcg.ercnve: eerfhyg = cne2_ercnve(onfr) vs eerfhyg != 0: cevag '%f cne2 ercnve: snvyrq (%q)' % (ynfg, eerfhyg) pbqr = eerfhyg ryfr: cevag '%f cne2 ercnve: fhpprrqrq (0)' % ynfg pbqr = 100 ryfr: cevag '%f cne2 irevsl: snvyrq (%q)' % (ynfg, ierfhyg) pbqr = ierfhyg ryfr: cevag '%f bx' % ynfg ryvs abg bcg.trarengr be (cne2_bx naq abg cne2_rkvfgf): terfhyg = tvg_irevsl(onfr) vs terfhyg != 0: cevag '%f tvg irevsl: snvyrq (%q)' % (ynfg, terfhyg) pbqr = terfhyg ryfr: vs cne2_bx naq bcg.trarengr: cerfhyg = cne2_trarengr(onfr) vs cerfhyg != 0: cevag '%f cne2 perngr: snvyrq (%q)' % (ynfg, cerfhyg) pbqr = cerfhyg ryfr: cevag '%f bx' % ynfg ryfr: cevag '%f bx' % ynfg ryfr: nffreg(bcg.trarengr naq (abg cne2_bx be cne2_rkvfgf)) qroht(' fxvccrq: cne2 svyr nyernql trarengrq.\a') erghea pbqr bcgfcrp = """ ohc sfpx [bcgvbaf...] [svyranzrf...] -- e,ercnve nggrzcg gb ercnve reebef hfvat cne2 (qnatrebhf!) 
t,trarengr trarengr nhgb-ercnve vasbezngvba hfvat cne2 i,ireobfr vapernfr ireobfvgl (pna or hfrq zber guna bapr) dhvpx whfg purpx cnpx fun1fhz, qba'g hfr tvg irevsl-cnpx w,wbof= eha 'a' wbof va cnenyyry cne2-bx vzzrqvngryl erghea 0 vs cne2 vf bx, 1 vs abg qvfnoyr-cne2 vtaber cne2 rira vs vg vf ninvynoyr """ b = bcgvbaf.Bcgvbaf('ohc sfpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) cne2_frghc() vs bcg.cne2_bx: vs cne2_bx: flf.rkvg(0) # 'gehr' va fu ryfr: flf.rkvg(1) vs bcg.qvfnoyr_cne2: cne2_bx = 0 tvg.purpx_ercb_be_qvr() vs abg rkgen: qroht('sfpx: Ab svyranzrf tvira: purpxvat nyy cnpxf.\a') rkgen = tybo.tybo(tvg.ercb('bowrpgf/cnpx/*.cnpx')) pbqr = 0 pbhag = 0 bhgfgnaqvat = {} sbe anzr va rkgen: vs anzr.raqfjvgu('.cnpx'): onfr = anzr[:-5] ryvs anzr.raqfjvgu('.vqk'): onfr = anzr[:-4] ryvs anzr.raqfjvgu('.cne2'): onfr = anzr[:-5] ryvs bf.cngu.rkvfgf(anzr + '.cnpx'): onfr = anzr ryfr: envfr Rkprcgvba('%f vf abg n cnpx svyr!' % anzr) (qve,ynfg) = bf.cngu.fcyvg(onfr) cne2_rkvfgf = bf.cngu.rkvfgf(onfr + '.cne2') vs cne2_rkvfgf naq bf.fgng(onfr + '.cne2').fg_fvmr == 0: cne2_rkvfgf = 0 flf.fgqbhg.syhfu() qroht('sfpx: purpxvat %f (%f)\a' % (ynfg, cne2_bx naq cne2_rkvfgf naq 'cne2' be 'tvg')) vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.wbof: ap = qb_cnpx(onfr, ynfg) pbqr = pbqr be ap pbhag += 1 ryfr: juvyr yra(bhgfgnaqvat) >= bcg.wbof: (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 cvq = bf.sbex() vs cvq: # cnerag bhgfgnaqvat[cvq] = 1 ryfr: # puvyq gel: flf.rkvg(qb_cnpx(onfr, ynfg)) rkprcg Rkprcgvba, r: ybt('rkprcgvba: %e\a' % r) flf.rkvg(99) juvyr yra(bhgfgnaqvat): (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.ireobfr naq vfggl: ybt('sfpx qbar. \a') flf.rkvg(pbqr) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg, trgbcg, fhocebprff, fvtany sebz ohc vzcbeg bcgvbaf, ffu sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc eonpxhc vaqrk ... ohc eonpxhc fnir ... ohc eonpxhc fcyvg ... """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc', bcgfcrp, bcgshap=trgbcg.trgbcg) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) < 2: b.sngny('nethzragf rkcrpgrq') pynff FvtRkprcgvba(Rkprcgvba): qrs __vavg__(frys, fvtahz): frys.fvtahz = fvtahz Rkprcgvba.__vavg__(frys, 'fvtany %q erprvirq' % fvtahz) qrs unaqyre(fvtahz, senzr): envfr FvtRkprcgvba(fvtahz) fvtany.fvtany(fvtany.FVTGREZ, unaqyre) fvtany.fvtany(fvtany.FVTVAG, unaqyre) fc = Abar c = Abar erg = 99 gel: ubfganzr = rkgen[0] neti = rkgen[1:] c = ffu.pbaarpg(ubfganzr, 'eonpxhc-freire') netif = '\0'.wbva(['ohc'] + neti) c.fgqva.jevgr(fgehpg.cnpx('!V', yra(netif)) + netif) c.fgqva.syhfu() znva_rkr = bf.raiveba.trg('OHC_ZNVA_RKR') be flf.neti[0] fc = fhocebprff.Cbcra([znva_rkr, 'freire'], fgqva=c.fgqbhg, fgqbhg=c.fgqva) c.fgqva.pybfr() c.fgqbhg.pybfr() svanyyl: juvyr 1: # vs jr trg n fvtany juvyr jnvgvat, jr unir gb xrrc jnvgvat, whfg # va pnfr bhe puvyq qbrfa'g qvr. 
gel: erg = c.jnvg() fc.jnvg() oernx rkprcg FvtRkprcgvba, r: ybt('\aohc eonpxhc: %f\a' % r) bf.xvyy(c.cvq, r.fvtahz) erg = 84 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc arjyvare """ b = bcgvbaf.Bcgvbaf('ohc arjyvare', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") e = er.pbzcvyr(e'([\e\a])') ynfgyra = 0 nyy = '' juvyr 1: y = e.fcyvg(nyy, 1) vs yra(y) <= 1: gel: o = bf.ernq(flf.fgqva.svyrab(), 4096) rkprcg XrlobneqVagreehcg: oernx vs abg o: oernx nyy += o ryfr: nffreg(yra(y) == 3) (yvar, fcyvgpune, nyy) = y #fcyvgpune = '\a' flf.fgqbhg.jevgr('%-*f%f' % (ynfgyra, yvar, fcyvgpune)) vs fcyvgpune == '\e': ynfgyra = yra(yvar) ryfr: ynfgyra = 0 flf.fgqbhg.syhfu() vs ynfgyra be nyy: flf.fgqbhg.jevgr('%-*f\a' % (ynfgyra, nyy)) #!/hfe/ova/rai clguba vzcbeg flf sebz ohc vzcbeg bcgvbaf, tvg, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc znetva """ b = bcgvbaf.Bcgvbaf('ohc znetva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") tvg.purpx_ercb_be_qvr() #tvg.vtaber_zvqk = 1 zv = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) ynfg = '\0'*20 ybatzngpu = 0 sbe v va zv: vs v == ynfg: pbagvahr #nffreg(fge(v) >= ynfg) cz = _unfufcyvg.ovgzngpu(ynfg, v) ybatzngpu = znk(ybatzngpu, cz) ynfg = v cevag ybatzngpu bup-0.29/t/testfile2000066400000000000000000004657101303127641400143370ustar00rootroot00000000000000#!/hfe/ova/rai clguba sebz ohc vzcbeg bcgvbaf, qerphefr sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc qerphefr -- k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf d,dhvrg qba'g npghnyyl cevag svyranzrf cebsvyr eha haqre gur clguba cebsvyre """ b = bcgvbaf.Bcgvbaf('ohc qerphefr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar svyranzr rkcrpgrq") vg = qerphefr.erphefvir_qveyvfg(rkgen, bcg.kqri) vs bcg.cebsvyr: vzcbeg pCebsvyr qrs qb_vg(): sbe v va vg: cnff pCebsvyr.eha('qb_vg()') ryfr: vs bcg.dhvrg: sbe v va vg: cnff ryfr: sbe (anzr,fg) va vg: cevag anzr vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc fcyvg [-gpo] [-a anzr] [--orapu] [svyranzrf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu o,oybof bhgchg n frevrf bs oybo vqf g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) A,abbc qba'g npghnyyl fnir gur qngn naljurer d,dhvrg qba'g cevag cebterff zrffntrf i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) pbcl whfg pbcl vachg gb bhgchg, unfufcyvggvat nybat gur jnl orapu cevag orapuznex gvzvatf gb fgqree znk-cnpx-fvmr= znkvzhz olgrf va n fvatyr cnpx znk-cnpx-bowrpgf= znkvzhz ahzore bs bowrpgf va n fvatyr cnpx snabhg= znkvzhz ahzore bs oybof va n fvatyr gerr """ b = bcgvbaf.Bcgvbaf('ohc fcyvg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr be bcg.abbc be bcg.pbcl): b.sngny("hfr bar be zber bs -o, -g, -p, -a, -A, --pbcl") vs (bcg.abbc be bcg.pbcl) naq (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny('-A vf vapbzcngvoyr jvgu -o, -g, -p, -a') vs bcg.ireobfr >= 2: tvg.ireobfr = bcg.ireobfr - 1 bcg.orapu = 1 vs bcg.znk_cnpx_fvmr: unfufcyvg.znk_cnpx_fvmr = cnefr_ahz(bcg.znk_cnpx_fvmr) vs bcg.znk_cnpx_bowrpgf: unfufcyvg.znk_cnpx_bowrpgf = cnefr_ahz(bcg.znk_cnpx_bowrpgf) vs bcg.snabhg: unfufcyvg.snabhg = cnefr_ahz(bcg.snabhg) vs bcg.oybof: unfufcyvg.snabhg = 0 vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") fgneg_gvzr = gvzr.gvzr() ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.abbc be bcg.pbcl: pyv = j = byqers = Abar ryvs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() svyrf = rkgen naq (bcra(sa) sbe sa va rkgen) be [flf.fgqva] vs j: funyvfg = unfufcyvg.fcyvg_gb_funyvfg(j, svyrf) gerr = j.arj_gerr(funyvfg) ryfr: ynfg = 0 sbe (oybo, ovgf) va unfufcyvg.unfufcyvg_vgre(svyrf): unfufcyvg.gbgny_fcyvg += yra(oybo) vs bcg.pbcl: flf.fgqbhg.jevgr(fge(oybo)) zrtf = unfufcyvg.gbgny_fcyvg/1024/1024 vs abg bcg.dhvrg naq ynfg != zrtf: cebterff('%q Zolgrf ernq\e' % zrtf) ynfg = zrtf cebterff('%q Zolgrf ernq, qbar.\a' % zrtf) vs bcg.ireobfr: ybt('\a') vs bcg.oybof: sbe (zbqr,anzr,ova) va funyvfg: cevag ova.rapbqr('urk') vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fcyvg\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') vs j: j.pwba vf punatvat fbzr enaqbz olgrf urer naq gurers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() frpf = gvzr.gvzr() - fgneg_gvzr fvmr = unfufcyvg.gbgny_fcyvg vs bcg.orapu: ybt('\aohc: %.2sxolgrf va %.2s frpf = %.2s xolgrf/frp\a' % (fvmr/1024., frpf, fvmr/1024./frpf)) #!/hfe/ova/rai clguba vzcbeg flf, er, fgehpg, zznc sebz ohc vzcbeg tvg, bcgvbaf sebz ohc.urycref vzcbeg * qrs f_sebz_olgrf(olgrf): pyvfg = [pue(o) sbe o va olgrf] erghea ''.wbva(pyvfg) qrs ercbeg(pbhag): svryqf = ['IzFvmr', 'IzEFF', 'IzQngn', 'IzFgx'] q = {} sbe yvar va bcra('/cebp/frys/fgnghf').ernqyvarf(): y = er.fcyvg(e':\f*', yvar.fgevc(), 1) q[y[0]] = y[1] vs pbhag >= 0: r1 = pbhag svryqf = [q[x] sbe x va svryqf] ryfr: r1 = '' cevag ('%9f ' + ('%10f ' * yra(svryqf))) % ghcyr([r1] + svryqf) flf.fgqbhg.syhfu() bcgfcrp = """ ohc zrzgrfg [-a ryrzragf] [-p plpyrf] -- a,ahzore= ahzore bs bowrpgf cre plpyr p,plpyrf= 
ahzore bs plpyrf gb eha vtaber-zvqk vtaber .zvqk svyrf, hfr bayl .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zrzgrfg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') tvg.vtaber_zvqk = bcg.vtaber_zvqk tvg.purpx_ercb_be_qvr() z = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) plpyrf = bcg.plpyrf be 100 ahzore = bcg.ahzore be 10000 ercbeg(-1) s = bcra('/qri/henaqbz') n = zznc.zznc(-1, 20) ercbeg(0) sbe p va kenatr(plpyrf): sbe a va kenatr(ahzore): o = s.ernq(3) vs 0: olgrf = yvfg(fgehpg.hacnpx('!OOO', o)) + [0]*17 olgrf[2] &= 0ks0 ova = fgehpg.cnpx('!20f', f_sebz_olgrf(olgrf)) ryfr: n[0:2] = o[0:2] n[2] = pue(beq(o[2]) & 0ks0) ova = fge(n[0:20]) #cevag ova.rapbqr('urk') z.rkvfgf(ova) ercbeg((p+1)*ahzore) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * qrs cevag_abqr(grkg, a): cersvk = '' vs bcg.unfu: cersvk += "%f " % a.unfu.rapbqr('urk') vs fgng.F_VFQVE(a.zbqr): cevag '%f%f/' % (cersvk, grkg) ryvs fgng.F_VFYAX(a.zbqr): cevag '%f%f@' % (cersvk, grkg) ryfr: cevag '%f%f' % (cersvk, grkg) bcgfcrp = """ ohc yf -- f,unfu fubj unfu sbe rnpu svyr """ b = bcgvbaf.Bcgvbaf('ohc yf', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) vs abg rkgen: rkgen = ['/'] erg = 0 sbe q va rkgen: gel: a = gbc.yerfbyir(q) vs fgng.F_VFQVE(a.zbqr): sbe fho va a: cevag_abqr(fho.anzr, fho) ryfr: cevag_abqr(q, a) rkprcg isf.AbqrReebe, r: ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er, fgng, ernqyvar, sazngpu sebz ohc vzcbeg bcgvbaf, tvg, fudhbgr, isf sebz ohc.urycref vzcbeg * qrs abqr_anzr(grkg, a): vs fgng.F_VFQVE(a.zbqr): erghea '%f/' % grkg ryvs fgng.F_VFYAX(a.zbqr): erghea '%f@' % grkg ryfr: erghea '%f' % grkg qrs qb_yf(cngu, a): y = [] vs fgng.F_VFQVE(a.zbqr): sbe fho va a: y.nccraq(abqr_anzr(fho.anzr, fho)) ryfr: y.nccraq(abqr_anzr(cngu, a)) cevag pbyhzangr(y, '') qrs jevgr_gb_svyr(vas, bhgs): sbe oybo va puhaxlernqre(vas): bhgs.jevgr(oybo) qrs vachgvgre(): vs bf.vfnggl(flf.fgqva.svyrab()): juvyr 1: gel: lvryq enj_vachg('ohc> ') rkprcg RBSReebe: oernx ryfr: sbe yvar va flf.fgqva: lvryq yvar qrs _pbzcyrgre_trg_fhof(yvar): (dglcr, ynfgjbeq) = fudhbgr.hasvavfurq_jbeq(yvar) (qve,anzr) = bf.cngu.fcyvg(ynfgjbeq) #ybt('\apbzcyrgre: %e %e %e\a' % (dglcr, ynfgjbeq, grkg)) a = cjq.erfbyir(qve) fhof = yvfg(svygre(ynzoqn k: k.anzr.fgnegfjvgu(anzr), a.fhof())) erghea (qve, anzr, dglcr, ynfgjbeq, fhof) _ynfg_yvar = Abar _ynfg_erf = Abar qrs pbzcyrgre(grkg, fgngr): tybony _ynfg_yvar tybony _ynfg_erf gel: yvar = ernqyvar.trg_yvar_ohssre()[:ernqyvar.trg_raqvqk()] vs _ynfg_yvar != yvar: _ynfg_erf = _pbzcyrgre_trg_fhof(yvar) _ynfg_yvar = yvar (qve, anzr, dglcr, ynfgjbeq, fhof) = _ynfg_erf vs fgngr < yra(fhof): fa = fhof[fgngr] fa1 = fa.erfbyir('') # qrers flzyvaxf shyyanzr = bf.cngu.wbva(qve, fa.anzr) vs fgng.F_VFQVE(fa1.zbqr): erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr+'/', grezvangr=Snyfr) ryfr: erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr, grezvangr=Gehr) + ' ' erghea grkg + erg rkprcg Rkprcgvba, r: ybt('\areebe va pbzcyrgvba: %f\a' % r) bcgfcrp = """ ohc sgc """ b = bcgvbaf.Bcgvbaf('ohc sgc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) cjq = gbc vs rkgen: yvarf = rkgen ryfr: ernqyvar.frg_pbzcyrgre_qryvzf(' \g\a\e/') ernqyvar.frg_pbzcyrgre(pbzcyrgre) ernqyvar.cnefr_naq_ovaq("gno: pbzcyrgr") yvarf = vachgvgre() sbe yvar va yvarf: vs abg yvar.fgevc(): pbagvahr jbeqf = [jbeq sbe 
(jbeqfgneg,jbeq) va fudhbgr.dhbgrfcyvg(yvar)] pzq = jbeqf[0].ybjre() #ybt('rkrphgr: %e %e\a' % (pzq, cnez)) gel: vs pzq == 'yf': sbe cnez va (jbeqf[1:] be ['.']): qb_yf(cnez, cjq.erfbyir(cnez)) ryvs pzq == 'pq': sbe cnez va jbeqf[1:]: cjq = cjq.erfbyir(cnez) ryvs pzq == 'cjq': cevag cjq.shyyanzr() ryvs pzq == 'png': sbe cnez va jbeqf[1:]: jevgr_gb_svyr(cjq.erfbyir(cnez).bcra(), flf.fgqbhg) ryvs pzq == 'trg': vs yra(jbeqf) abg va [2,3]: envfr Rkprcgvba('Hfntr: trg [ybpnyanzr]') eanzr = jbeqf[1] (qve,onfr) = bf.cngu.fcyvg(eanzr) yanzr = yra(jbeqf)>2 naq jbeqf[2] be onfr vas = cjq.erfbyir(eanzr).bcra() ybt('Fnivat %e\a' % yanzr) jevgr_gb_svyr(vas, bcra(yanzr, 'jo')) ryvs pzq == 'ztrg': sbe cnez va jbeqf[1:]: (qve,onfr) = bf.cngu.fcyvg(cnez) sbe a va cjq.erfbyir(qve).fhof(): vs sazngpu.sazngpu(a.anzr, onfr): gel: ybt('Fnivat %e\a' % a.anzr) vas = a.bcra() bhgs = bcra(a.anzr, 'jo') jevgr_gb_svyr(vas, bhgs) bhgs.pybfr() rkprcg Rkprcgvba, r: ybt(' reebe: %f\a' % r) ryvs pzq == 'uryc' be pzq == '?': ybt('Pbzznaqf: yf pq cjq png trg ztrg uryc dhvg\a') ryvs pzq == 'dhvg' be pzq == 'rkvg' be pzq == 'olr': oernx ryfr: envfr Rkprcgvba('ab fhpu pbzznaq %e' % pzq) rkprcg Rkprcgvba, r: ybt('reebe: %f\a' % r) #envfr #!/hfe/ova/rai clguba vzcbeg flf, zznc sebz ohc vzcbeg bcgvbaf, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc enaqbz [-F frrq] -- F,frrq= bcgvbany enaqbz ahzore frrq (qrsnhyg 1) s,sbepr cevag enaqbz qngn gb fgqbhg rira vs vg'f n ggl """ b = bcgvbaf.Bcgvbaf('ohc enaqbz', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") gbgny = cnefr_ahz(rkgen[0]) vs bcg.sbepr be (abg bf.vfnggl(1) naq abg ngbv(bf.raiveba.trg('OHC_SBEPR_GGL')) & 1): _unfufcyvg.jevgr_enaqbz(flf.fgqbhg.svyrab(), gbgny, bcg.frrq be 0) ryfr: ybt('reebe: abg jevgvat ovanel qngn gb n grezvany. 
Hfr -s gb sbepr.\a') flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc uryc """ b = bcgvbaf.Bcgvbaf('ohc uryc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) == 0: # gur jenccre cebtenz cebivqrf gur qrsnhyg hfntr fgevat bf.rkrpic(bf.raiveba['OHC_ZNVA_RKR'], ['ohc']) ryvs yra(rkgen) == 1: qbpanzr = (rkgen[0]=='ohc' naq 'ohc' be ('ohc-%f' % rkgen[0])) rkr = flf.neti[0] (rkrcngu, rkrsvyr) = bf.cngu.fcyvg(rkr) znacngu = bf.cngu.wbva(rkrcngu, '../Qbphzragngvba/' + qbpanzr + '.[1-9]') t = tybo.tybo(znacngu) vs t: bf.rkrpic('zna', ['zna', '-y', t[0]]) ryfr: bf.rkrpic('zna', ['zna', qbpanzr]) ryfr: b.sngny("rknpgyl bar pbzznaq anzr rkcrpgrq") #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng, reeab, shfr, er, gvzr, grzcsvyr sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * pynff Fgng(shfr.Fgng): qrs __vavg__(frys): frys.fg_zbqr = 0 frys.fg_vab = 0 frys.fg_qri = 0 frys.fg_ayvax = 0 frys.fg_hvq = 0 frys.fg_tvq = 0 frys.fg_fvmr = 0 frys.fg_ngvzr = 0 frys.fg_zgvzr = 0 frys.fg_pgvzr = 0 frys.fg_oybpxf = 0 frys.fg_oyxfvmr = 0 frys.fg_eqri = 0 pnpur = {} qrs pnpur_trg(gbc, cngu): cnegf = cngu.fcyvg('/') pnpur[('',)] = gbc p = Abar znk = yra(cnegf) #ybt('pnpur: %e\a' % pnpur.xrlf()) sbe v va enatr(znk): cer = cnegf[:znk-v] #ybt('pnpur gelvat: %e\a' % cer) p = pnpur.trg(ghcyr(cer)) vs p: erfg = cnegf[znk-v:] sbe e va erfg: #ybt('erfbyivat %e sebz %e\a' % (e, p.shyyanzr())) p = p.yerfbyir(e) xrl = ghcyr(cer + [e]) #ybt('fnivat: %e\a' % (xrl,)) pnpur[xrl] = p oernx nffreg(p) erghea p pynff OhcSf(shfr.Shfr): qrs __vavg__(frys, gbc): shfr.Shfr.__vavg__(frys) frys.gbc = gbc qrs trgngge(frys, cngu): ybt('--trgngge(%e)\a' % cngu) gel: abqr = pnpur_trg(frys.gbc, cngu) fg = Fgng() fg.fg_zbqr = abqr.zbqr fg.fg_ayvax = abqr.ayvaxf() fg.fg_fvmr = abqr.fvmr() fg.fg_zgvzr = abqr.zgvzr fg.fg_pgvzr = abqr.pgvzr fg.fg_ngvzr = abqr.ngvzr erghea fg rkprcg isf.AbFhpuSvyr: erghea -reeab.RABRAG qrs ernqqve(frys, cngu, bssfrg): ybt('--ernqqve(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) lvryq shfr.Qveragel('.') lvryq shfr.Qveragel('..') sbe fho va abqr.fhof(): lvryq shfr.Qveragel(fho.anzr) qrs ernqyvax(frys, cngu): ybt('--ernqyvax(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) erghea abqr.ernqyvax() qrs bcra(frys, cngu, syntf): ybt('--bcra(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) nppzbqr = bf.B_EQBAYL | bf.B_JEBAYL | bf.B_EQJE vs (syntf & nppzbqr) != bf.B_EQBAYL: erghea -reeab.RNPPRF abqr.bcra() qrs eryrnfr(frys, cngu, syntf): ybt('--eryrnfr(%e)\a' % cngu) qrs ernq(frys, cngu, fvmr, bssfrg): ybt('--ernq(%e)\a' % cngu) a = pnpur_trg(frys.gbc, cngu) b = a.bcra() b.frrx(bssfrg) erghea b.ernq(fvmr) vs abg unfngge(shfr, '__irefvba__'): envfr EhagvzrReebe, "lbhe shfr zbqhyr vf gbb byq sbe shfr.__irefvba__" shfr.shfr_clguba_ncv = (0, 2) bcgfcrp = """ ohc shfr [-q] [-s] -- q,qroht vapernfr qroht yriry s,sbertebhaq eha va sbertebhaq """ b = bcgvbaf.Bcgvbaf('ohc shfr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) s = OhcSf(gbc) s.shfr_netf.zbhagcbvag = rkgen[0] vs bcg.qroht: s.shfr_netf.nqq('qroht') vs bcg.sbertebhaq: s.shfr_netf.frgzbq('sbertebhaq') cevag s.zhygvguernqrq s.zhygvguernqrq = Snyfr s.znva() #!/hfe/ova/rai clguba sebz ohc vzcbeg tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ [OHC_QVE=...] 
ohc vavg [-e ubfg:cngu] -- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc vavg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") vs bcg.erzbgr: tvg.vavg_ercb() # ybpny ercb tvg.purpx_ercb_be_qvr() pyv = pyvrag.Pyvrag(bcg.erzbgr, perngr=Gehr) pyv.pybfr() ryfr: tvg.vavg_ercb() #!/hfe/ova/rai clguba vzcbeg flf, zngu, fgehpg, tybo sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * CNTR_FVMR=4096 FUN_CRE_CNTR=CNTR_FVMR/200. qrs zretr(vqkyvfg, ovgf, gnoyr): pbhag = 0 sbe r va tvg.vqkzretr(vqkyvfg): pbhag += 1 cersvk = tvg.rkgenpg_ovgf(r, ovgf) gnoyr[cersvk] = pbhag lvryq r qrs qb_zvqk(bhgqve, bhgsvyranzr, vasvyranzrf): vs abg bhgsvyranzr: nffreg(bhgqve) fhz = Fun1('\0'.wbva(vasvyranzrf)).urkqvtrfg() bhgsvyranzr = '%f/zvqk-%f.zvqk' % (bhgqve, fhz) vac = [] gbgny = 0 sbe anzr va vasvyranzrf: vk = tvg.CnpxVqk(anzr) vac.nccraq(vk) gbgny += yra(vk) ybt('Zretvat %q vaqrkrf (%q bowrpgf).\a' % (yra(vasvyranzrf), gbgny)) vs (abg bcg.sbepr naq (gbgny < 1024 naq yra(vasvyranzrf) < 3)) \ be (bcg.sbepr naq abg gbgny): ybt('zvqk: abguvat gb qb.\a') erghea cntrf = vag(gbgny/FUN_CRE_CNTR) be 1 ovgf = vag(zngu.prvy(zngu.ybt(cntrf, 2))) ragevrf = 2**ovgf ybt('Gnoyr fvmr: %q (%q ovgf)\a' % (ragevrf*4, ovgf)) gnoyr = [0]*ragevrf gel: bf.hayvax(bhgsvyranzr) rkprcg BFReebe: cnff s = bcra(bhgsvyranzr + '.gzc', 'j+') s.jevgr('ZVQK\0\0\0\2') s.jevgr(fgehpg.cnpx('!V', ovgf)) nffreg(s.gryy() == 12) s.jevgr('\0'*4*ragevrf) sbe r va zretr(vac, ovgf, gnoyr): s.jevgr(r) s.jevgr('\0'.wbva(bf.cngu.onfranzr(c) sbe c va vasvyranzrf)) s.frrx(12) s.jevgr(fgehpg.cnpx('!%qV' % ragevrf, *gnoyr)) s.pybfr() bf.eranzr(bhgsvyranzr + '.gzc', bhgsvyranzr) # guvf vf whfg sbe grfgvat vs 0: c = tvg.CnpxZvqk(bhgsvyranzr) nffreg(yra(c.vqkanzrf) == yra(vasvyranzrf)) cevag c.vqkanzrf nffreg(yra(c) == gbgny) cv = vgre(c) sbe v va zretr(vac, gbgny, ovgf, gnoyr): nffreg(v == cv.arkg()) nffreg(c.rkvfgf(v)) cevag bhgsvyranzr bcgfcrp = """ ohc zvqk [bcgvbaf...] 
-- b,bhgchg= bhgchg zvqk svyranzr (qrsnhyg: nhgb-trarengrq) n,nhgb nhgbzngvpnyyl perngr .zvqk sebz nal havaqrkrq .vqk svyrf s,sbepr nhgbzngvpnyyl perngr .zvqk sebz *nyy* .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zvqk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen naq (bcg.nhgb be bcg.sbepr): b.sngny("lbh pna'g hfr -s/-n naq nyfb cebivqr svyranzrf") tvg.purpx_ercb_be_qvr() vs rkgen: qb_zvqk(tvg.ercb('bowrpgf/cnpx'), bcg.bhgchg, rkgen) ryvs bcg.nhgb be bcg.sbepr: cnguf = [tvg.ercb('bowrpgf/cnpx')] cnguf += tybo.tybo(tvg.ercb('vaqrk-pnpur/*/.')) sbe cngu va cnguf: ybt('zvqk: fpnaavat %f\a' % cngu) vs bcg.sbepr: qb_zvqk(cngu, bcg.bhgchg, tybo.tybo('%f/*.vqk' % cngu)) ryvs bcg.nhgb: z = tvg.CnpxVqkYvfg(cngu) arrqrq = {} sbe cnpx va z.cnpxf: # bayl .vqk svyrf jvgubhg n .zvqk ner bcra vs cnpx.anzr.raqfjvgu('.vqk'): arrqrq[cnpx.anzr] = 1 qry z qb_zvqk(cngu, bcg.bhgchg, arrqrq.xrlf()) ybt('\a') ryfr: b.sngny("lbh zhfg hfr -s be -n be cebivqr vachg svyranzrf") #!/hfe/ova/rai clguba vzcbeg flf, bf, enaqbz sebz ohc vzcbeg bcgvbaf sebz ohc.urycref vzcbeg * qrs enaqoybpx(a): y = [] sbe v va kenatr(a): y.nccraq(pue(enaqbz.enaqenatr(0,256))) erghea ''.wbva(y) bcgfcrp = """ ohc qnzntr [-a pbhag] [-f znkfvmr] [-F frrq] -- JNEAVAT: GUVF PBZZNAQ VF RKGERZRYL QNATREBHF a,ahz= ahzore bs oybpxf gb qnzntr f,fvmr= znkvzhz fvmr bs rnpu qnzntrq oybpx creprag= znkvzhz fvmr bs rnpu qnzntrq oybpx (nf n creprag bs ragver svyr) rdhny fcernq qnzntr rirayl guebhtubhg gur svyr F,frrq= enaqbz ahzore frrq (sbe ercrngnoyr grfgf) """ b = bcgvbaf.Bcgvbaf('ohc qnzntr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg rkgen: b.sngny('svyranzrf rkcrpgrq') vs bcg.frrq != Abar: enaqbz.frrq(bcg.frrq) sbe anzr va rkgen: ybt('Qnzntvat "%f"...\a' % anzr) s = bcra(anzr, 'e+o') fg = bf.sfgng(s.svyrab()) fvmr = fg.fg_fvmr vs bcg.creprag be bcg.fvmr: zf1 = vag(sybng(bcg.creprag be 0)/100.0*fvmr) be fvmr zf2 = bcg.fvmr be fvmr znkfvmr = zva(zf1, zf2) ryfr: znkfvmr = 1 puhaxf = bcg.ahz be 10 puhaxfvmr = fvmr/puhaxf sbe e va enatr(puhaxf): fm = enaqbz.enaqenatr(1, znkfvmr+1) vs fm > fvmr: fm = fvmr vs bcg.rdhny: bsf = e*puhaxfvmr ryfr: bsf = enaqbz.enaqenatr(0, fvmr - fm + 1) ybt(' %6q olgrf ng %q\a' % (fm, bsf)) s.frrx(bsf) s.jevgr(enaqoybpx(fm)) s.pybfr() #!/hfe/ova/rai clguba vzcbeg flf, fgehpg, zznc sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * fhfcraqrq_j = Abar qrs vavg_qve(pbaa, net): tvg.vavg_ercb(net) ybt('ohc freire: ohcqve vavgvnyvmrq: %e\a' % tvg.ercbqve) pbaa.bx() qrs frg_qve(pbaa, net): tvg.purpx_ercb_be_qvr(net) ybt('ohc freire: ohcqve vf %e\a' % tvg.ercbqve) pbaa.bx() qrs yvfg_vaqrkrf(pbaa, whax): tvg.purpx_ercb_be_qvr() sbe s va bf.yvfgqve(tvg.ercb('bowrpgf/cnpx')): vs s.raqfjvgu('.vqk'): pbaa.jevgr('%f\a' % s) pbaa.bx() qrs fraq_vaqrk(pbaa, anzr): tvg.purpx_ercb_be_qvr() nffreg(anzr.svaq('/') < 0) nffreg(anzr.raqfjvgu('.vqk')) vqk = tvg.CnpxVqk(tvg.ercb('bowrpgf/cnpx/%f' % anzr)) pbaa.jevgr(fgehpg.cnpx('!V', yra(vqk.znc))) pbaa.jevgr(vqk.znc) pbaa.bx() qrs erprvir_bowrpgf(pbaa, whax): tybony fhfcraqrq_j tvg.purpx_ercb_be_qvr() fhttrfgrq = {} vs fhfcraqrq_j: j = fhfcraqrq_j fhfcraqrq_j = Abar ryfr: j = tvg.CnpxJevgre() juvyr 1: af = pbaa.ernq(4) vs abg af: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq yratgu urnqre, tbg RBS\a') a = fgehpg.hacnpx('!V', af)[0] #ybt('rkcrpgvat %q olgrf\a' % a) vs abg a: ybt('ohc freire: erprvirq %q bowrpg%f.\a' % (j.pbhag, j.pbhag!=1 naq "f" be '')) shyycngu = j.pybfr() vs shyycngu: (qve, anzr) = bf.cngu.fcyvg(shyycngu) pbaa.jevgr('%f.vqk\a' % anzr) pbaa.bx() 
erghea ryvs a == 0kssssssss: ybt('ohc freire: erprvir-bowrpgf fhfcraqrq.\a') fhfcraqrq_j = j pbaa.bx() erghea ohs = pbaa.ernq(a) # bowrpg fvmrf va ohc ner ernfbanoyl fznyy #ybt('ernq %q olgrf\a' % a) vs yra(ohs) < a: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq %q olgrf, tbg %q\a' % (a, yra(ohs))) (glcr, pbagrag) = tvg._qrpbqr_cnpxbow(ohs) fun = tvg.pnyp_unfu(glcr, pbagrag) byqcnpx = j.rkvfgf(fun) # SVKZR: jr bayl fhttrfg n fvatyr vaqrk cre plpyr, orpnhfr gur pyvrag # vf pheeragyl qhzo gb qbjaybnq zber guna bar cre plpyr naljnl. # Npghnyyl jr fubhyq svk gur pyvrag, ohg guvf vf n zvabe bcgvzvmngvba # ba gur freire fvqr. vs abg fhttrfgrq naq \ byqcnpx naq (byqcnpx == Gehr be byqcnpx.raqfjvgu('.zvqk')): # SVKZR: jr fubhyqa'g ernyyl unir gb xabj nobhg zvqk svyrf # ng guvf ynlre. Ohg rkvfgf() ba n zvqk qbrfa'g erghea gur # cnpxanzr (fvapr vg qbrfa'g xabj)... cebonoyl jr fubhyq whfg # svk gung qrsvpvrapl bs zvqk svyrf riraghnyyl, nygubhtu vg'yy # znxr gur svyrf ovttre. Guvf zrgubq vf pregnvayl abg irel # rssvpvrag. j.bowpnpur.erserfu(fxvc_zvqk = Gehr) byqcnpx = j.bowpnpur.rkvfgf(fun) ybt('arj fhttrfgvba: %e\a' % byqcnpx) nffreg(byqcnpx) nffreg(byqcnpx != Gehr) nffreg(abg byqcnpx.raqfjvgu('.zvqk')) j.bowpnpur.erserfu(fxvc_zvqk = Snyfr) vs abg fhttrfgrq naq byqcnpx: nffreg(byqcnpx.raqfjvgu('.vqk')) (qve,anzr) = bf.cngu.fcyvg(byqcnpx) vs abg (anzr va fhttrfgrq): ybt("ohc freire: fhttrfgvat vaqrk %f\a" % anzr) pbaa.jevgr('vaqrk %f\a' % anzr) fhttrfgrq[anzr] = 1 ryfr: j._enj_jevgr([ohs]) # ABGERNPURQ qrs ernq_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() e = tvg.ernq_ers(ersanzr) pbaa.jevgr('%f\a' % (e be '').rapbqr('urk')) pbaa.bx() qrs hcqngr_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() arjiny = pbaa.ernqyvar().fgevc() byqiny = pbaa.ernqyvar().fgevc() tvg.hcqngr_ers(ersanzr, arjiny.qrpbqr('urk'), byqiny.qrpbqr('urk')) pbaa.bx() qrs png(pbaa, vq): tvg.purpx_ercb_be_qvr() gel: sbe oybo va tvg.png(vq): pbaa.jevgr(fgehpg.cnpx('!V', yra(oybo))) pbaa.jevgr(oybo) rkprcg XrlReebe, r: ybt('freire: reebe: %f\a' % r) pbaa.jevgr('\0\0\0\0') pbaa.reebe(r) ryfr: pbaa.jevgr('\0\0\0\0') pbaa.bx() bcgfcrp = """ ohc freire """ b = bcgvbaf.Bcgvbaf('ohc freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') ybt('ohc freire: ernqvat sebz fgqva.\a') pbzznaqf = { 'vavg-qve': vavg_qve, 'frg-qve': frg_qve, 'yvfg-vaqrkrf': yvfg_vaqrkrf, 'fraq-vaqrk': fraq_vaqrk, 'erprvir-bowrpgf': erprvir_bowrpgf, 'ernq-ers': ernq_ers, 'hcqngr-ers': hcqngr_ers, 'png': png, } # SVKZR: guvf cebgbpby vf gbgnyyl ynzr naq abg ng nyy shgher-cebbs. # (Rfcrpvnyyl fvapr jr nobeg pbzcyrgryl nf fbba nf *nalguvat* onq unccraf) pbaa = Pbaa(flf.fgqva, flf.fgqbhg) ye = yvarernqre(pbaa) sbe _yvar va ye: yvar = _yvar.fgevc() vs abg yvar: pbagvahr ybt('ohc freire: pbzznaq: %e\a' % yvar) jbeqf = yvar.fcyvg(' ', 1) pzq = jbeqf[0] erfg = yra(jbeqf)>1 naq jbeqf[1] be '' vs pzq == 'dhvg': oernx ryfr: pzq = pbzznaqf.trg(pzq) vs pzq: pzq(pbaa, erfg) ryfr: envfr Rkprcgvba('haxabja freire pbzznaq: %e\a' % yvar) ybt('ohc freire: qbar\a') #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc wbva [-e ubfg:cngu] [ersf be unfurf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc wbva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg rkgen: rkgen = yvarernqre(flf.fgqva) erg = 0 vs bcg.erzbgr: pyv = pyvrag.Pyvrag(bcg.erzbgr) png = pyv.png ryfr: pc = tvg.PngCvcr() png = pc.wbva sbe vq va rkgen: gel: sbe oybo va png(vq): flf.fgqbhg.jevgr(oybo) rkprcg XrlReebe, r: flf.fgqbhg.syhfu() ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, er, reeab, fgng, gvzr, zngu sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, vaqrk, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc fnir [-gp] [-a anzr] -- e,erzbgr= erzbgr ercbfvgbel cngu g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) d,dhvrg qba'g fubj cebterff zrgre fznyyre= bayl onpx hc svyrf fznyyre guna a olgrf """ b = bcgvbaf.Bcgvbaf('ohc fnir', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny("hfr bar be zber bs -g, -p, -a") vs abg rkgen: b.sngny("ab svyranzrf tvira") bcg.cebterff = (vfggl naq abg bcg.dhvrg) bcg.fznyyre = cnefr_ahz(bcg.fznyyre be 0) vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() unaqyr_pgey_p() qrs rngfynfu(qve): vs qve.raqfjvgu('/'): erghea qve[:-1] ryfr: erghea qve cnegf = [''] funyvfgf = [[]] qrs _chfu(cneg): nffreg(cneg) cnegf.nccraq(cneg) funyvfgf.nccraq([]) qrs _cbc(sbepr_gerr): nffreg(yra(cnegf) >= 1) cneg = cnegf.cbc() funyvfg = funyvfgf.cbc() gerr = sbepr_gerr be j.arj_gerr(funyvfg) vs funyvfgf: funyvfgf[-1].nccraq(('40000', cneg, gerr)) ryfr: # guvf jnf gur gbcyriry, fb chg vg onpx sbe fnavgl funyvfgf.nccraq(funyvfg) erghea gerr ynfgerznva = Abar qrs cebterff_ercbeg(a): tybony pbhag, fhopbhag, ynfgerznva fhopbhag += a pp = pbhag + fhopbhag cpg = gbgny naq (pp*100.0/gbgny) be 0 abj = gvzr.gvzr() ryncfrq = abj - gfgneg xcf = ryncfrq naq vag(pp/1024./ryncfrq) xcf_senp = 10 ** vag(zngu.ybt(xcf+1, 10) - 1) xcf = vag(xcf/xcf_senp)*xcf_senp vs pp: erznva = ryncfrq*1.0/pp * (gbgny-pp) ryfr: erznva = 0.0 vs (ynfgerznva naq (erznva > ynfgerznva) naq ((erznva - ynfgerznva)/ynfgerznva < 0.05)): erznva = ynfgerznva ryfr: ynfgerznva = erznva ubhef = vag(erznva/60/60) zvaf = vag(erznva/60 - ubhef*60) frpf = vag(erznva - ubhef*60*60 - zvaf*60) vs ryncfrq < 30: erznvafge = '' xcffge = '' ryfr: xcffge = '%qx/f' % xcf vs ubhef: erznvafge = '%qu%qz' % (ubhef, zvaf) ryvs zvaf: erznvafge = '%qz%q' % (zvaf, frpf) ryfr: erznvafge = '%qf' % frpf cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf) %f %f\e' % (cpg, pp/1024, gbgny/1024, spbhag, sgbgny, erznvafge, xcffge)) e = vaqrk.Ernqre(tvg.ercb('ohcvaqrk')) qrs nyernql_fnirq(rag): erghea rag.vf_inyvq() naq j.rkvfgf(rag.fun) naq rag.fun qrs jnagerphefr_cer(rag): erghea abg nyernql_fnirq(rag) qrs jnagerphefr_qhevat(rag): erghea abg nyernql_fnirq(rag) be rag.fun_zvffvat() gbgny = sgbgny = 0 vs bcg.cebterff: sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_cer): vs abg (sgbgny % 10024): cebterff('Ernqvat vaqrk: %q\e' % sgbgny) rkvfgf = rag.rkvfgf() unfuinyvq = nyernql_fnirq(rag) 
rag.frg_fun_zvffvat(abg unfuinyvq) vs abg bcg.fznyyre be rag.fvmr < bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: gbgny += rag.fvmr sgbgny += 1 cebterff('Ernqvat vaqrk: %q, qbar.\a' % sgbgny) unfufcyvg.cebterff_pnyyonpx = cebterff_ercbeg gfgneg = gvzr.gvzr() pbhag = fhopbhag = spbhag = 0 ynfgfxvc_anzr = Abar ynfgqve = '' sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_qhevat): (qve, svyr) = bf.cngu.fcyvg(rag.anzr) rkvfgf = (rag.syntf & vaqrk.VK_RKVFGF) unfuinyvq = nyernql_fnirq(rag) jnfzvffvat = rag.fun_zvffvat() byqfvmr = rag.fvmr vs bcg.ireobfr: vs abg rkvfgf: fgnghf = 'Q' ryvs abg unfuinyvq: vs rag.fun == vaqrk.RZCGL_FUN: fgnghf = 'N' ryfr: fgnghf = 'Z' ryfr: fgnghf = ' ' vs bcg.ireobfr >= 2: ybt('%f %-70f\a' % (fgnghf, rag.anzr)) ryvs abg fgng.F_VFQVE(rag.zbqr) naq ynfgqve != qve: vs abg ynfgqve.fgnegfjvgu(qve): ybt('%f %-70f\a' % (fgnghf, bf.cngu.wbva(qve, ''))) ynfgqve = qve vs bcg.cebterff: cebterff_ercbeg(0) spbhag += 1 vs abg rkvfgf: pbagvahr vs bcg.fznyyre naq rag.fvmr >= bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: nqq_reebe('fxvccvat ynetr svyr "%f"' % rag.anzr) ynfgfxvc_anzr = rag.anzr pbagvahr nffreg(qve.fgnegfjvgu('/')) qvec = qve.fcyvg('/') juvyr cnegf > qvec: _cbc(sbepr_gerr = Abar) vs qve != '/': sbe cneg va qvec[yra(cnegf):]: _chfu(cneg) vs abg svyr: # ab svyranzr cbegvba zrnaf guvf vf n fhoqve. Ohg # fho/cneragqverpgbevrf nyernql unaqyrq va gur cbc/chfu() cneg nobir. byqgerr = nyernql_fnirq(rag) # znl or Abar arjgerr = _cbc(sbepr_gerr = byqgerr) vs abg byqgerr: vs ynfgfxvc_anzr naq ynfgfxvc_anzr.fgnegfjvgu(rag.anzr): rag.vainyvqngr() ryfr: rag.inyvqngr(040000, arjgerr) rag.ercnpx() vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr pbagvahr # vg'f abg n qverpgbel vq = Abar vs unfuinyvq: zbqr = '%b' % rag.tvgzbqr vq = rag.fun funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) ryfr: vs fgng.F_VFERT(rag.zbqr): gel: s = unfufcyvg.bcra_abngvzr(rag.anzr) rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = unfufcyvg.fcyvg_gb_oybo_be_gerr(j, [s]) ryfr: vs fgng.F_VFQVE(rag.zbqr): nffreg(0) # unaqyrq nobir ryvs fgng.F_VFYAX(rag.zbqr): gel: ey = bf.ernqyvax(rag.anzr) rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = ('120000', j.arj_oybo(ey)) ryfr: nqq_reebe(Rkprcgvba('fxvccvat fcrpvny svyr "%f"' % rag.anzr)) ynfgfxvc_anzr = rag.anzr vs vq: rag.inyvqngr(vag(zbqr, 8), vq) rag.ercnpx() funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr fhopbhag = 0 vs bcg.cebterff: cpg = gbgny naq pbhag*100.0/gbgny be 100 cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf), qbar. 
\a' % (cpg, pbhag/1024, gbgny/1024, spbhag, sgbgny)) juvyr yra(cnegf) > 1: _cbc(sbepr_gerr = Abar) nffreg(yra(funyvfgf) == 1) gerr = j.arj_gerr(funyvfgf[-1]) vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fnir\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq juvyr fnivat.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc gvpx """ b = bcgvbaf.Bcgvbaf('ohc gvpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") g = gvzr.gvzr() gyrsg = 1 - (g - vag(g)) gvzr.fyrrc(gyrsg) #!/hfe/ova/rai clguba vzcbeg bf, flf, fgng, gvzr sebz ohc vzcbeg bcgvbaf, tvg, vaqrk, qerphefr sebz ohc.urycref vzcbeg * qrs zretr_vaqrkrf(bhg, e1, e2): sbe r va vaqrk.ZretrVgre([e1, e2]): # SVKZR: fubhyqa'g jr erzbir qryrgrq ragevrf riraghnyyl? Jura? bhg.nqq_vkragel(r) pynff VgreUrycre: qrs __vavg__(frys, y): frys.v = vgre(y) frys.phe = Abar frys.arkg() qrs arkg(frys): gel: frys.phe = frys.v.arkg() rkprcg FgbcVgrengvba: frys.phe = Abar erghea frys.phe qrs purpx_vaqrk(ernqre): gel: ybt('purpx: purpxvat sbejneq vgrengvba...\a') r = Abar q = {} sbe r va ernqre.sbejneq_vgre(): vs r.puvyqera_a: vs bcg.ireobfr: ybt('%08k+%-4q %e\a' % (r.puvyqera_bsf, r.puvyqera_a, r.anzr)) nffreg(r.puvyqera_bsf) nffreg(r.anzr.raqfjvgu('/')) nffreg(abg q.trg(r.puvyqera_bsf)) q[r.puvyqera_bsf] = 1 vs r.syntf & vaqrk.VK_UNFUINYVQ: nffreg(r.fun != vaqrk.RZCGL_FUN) nffreg(r.tvgzbqr) nffreg(abg r be r.anzr == '/') # ynfg ragel vf *nyjnlf* / ybt('purpx: purpxvat abezny vgrengvba...\a') ynfg = Abar sbe r va ernqre: vs ynfg: nffreg(ynfg > r.anzr) ynfg = r.anzr rkprcg: ybt('vaqrk reebe! ng %e\a' % r) envfr ybt('purpx: cnffrq.\a') qrs hcqngr_vaqrk(gbc): ev = vaqrk.Ernqre(vaqrksvyr) jv = vaqrk.Jevgre(vaqrksvyr) evt = VgreUrycre(ev.vgre(anzr=gbc)) gfgneg = vag(gvzr.gvzr()) unfutra = Abar vs bcg.snxr_inyvq: qrs unfutra(anzr): erghea (0100644, vaqrk.SNXR_FUN) gbgny = 0 sbe (cngu,cfg) va qerphefr.erphefvir_qveyvfg([gbc], kqri=bcg.kqri): vs bcg.ireobfr>=2 be (bcg.ireobfr==1 naq fgng.F_VFQVE(cfg.fg_zbqr)): flf.fgqbhg.jevgr('%f\a' % cngu) flf.fgqbhg.syhfu() cebterff('Vaqrkvat: %q\e' % gbgny) ryvs abg (gbgny % 128): cebterff('Vaqrkvat: %q\e' % gbgny) gbgny += 1 juvyr evt.phe naq evt.phe.anzr > cngu: # qryrgrq cnguf vs evt.phe.rkvfgf(): evt.phe.frg_qryrgrq() evt.phe.ercnpx() evt.arkg() vs evt.phe naq evt.phe.anzr == cngu: # cnguf gung nyernql rkvfgrq vs cfg: evt.phe.sebz_fgng(cfg, gfgneg) vs abg (evt.phe.syntf & vaqrk.VK_UNFUINYVQ): vs unfutra: (evt.phe.tvgzbqr, evt.phe.fun) = unfutra(cngu) evt.phe.syntf |= vaqrk.VK_UNFUINYVQ vs bcg.snxr_vainyvq: evt.phe.vainyvqngr() evt.phe.ercnpx() evt.arkg() ryfr: # arj cnguf jv.nqq(cngu, cfg, unfutra = unfutra) cebterff('Vaqrkvat: %q, qbar.\a' % gbgny) vs ev.rkvfgf(): ev.fnir() jv.syhfu() vs jv.pbhag: je = jv.arj_ernqre() vs bcg.purpx: ybt('purpx: orsber zretvat: byqsvyr\a') purpx_vaqrk(ev) ybt('purpx: orsber zretvat: arjsvyr\a') purpx_vaqrk(je) zv = vaqrk.Jevgre(vaqrksvyr) zretr_vaqrkrf(zv, ev, je) ev.pybfr() zv.pybfr() je.pybfr() jv.nobeg() ryfr: jv.pybfr() bcgfcrp = """ ohc vaqrk <-c|z|h> [bcgvbaf...] 
-- c,cevag cevag gur vaqrk ragevrf sbe gur tvira anzrf (nyfb jbexf jvgu -h) z,zbqvsvrq cevag bayl nqqrq/qryrgrq/zbqvsvrq svyrf (vzcyvrf -c) f,fgnghf cevag rnpu svyranzr jvgu n fgnghf pune (N/Z/Q) (vzcyvrf -c) U,unfu cevag gur unfu sbe rnpu bowrpg arkg gb vgf anzr (vzcyvrf -c) y,ybat cevag zber vasbezngvba nobhg rnpu svyr h,hcqngr (erphefviryl) hcqngr gur vaqrk ragevrf sbe gur tvira svyranzrf k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf snxr-inyvq znex nyy vaqrk ragevrf nf hc-gb-qngr rira vs gurl nera'g snxr-vainyvq znex nyy vaqrk ragevrf nf vainyvq purpx pnershyyl purpx vaqrk svyr vagrtevgl s,vaqrksvyr= gur anzr bs gur vaqrk svyr (qrsnhyg 'vaqrk') i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) """ b = bcgvbaf.Bcgvbaf('ohc vaqrk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg (bcg.zbqvsvrq be bcg['cevag'] be bcg.fgnghf be bcg.hcqngr be bcg.purpx): b.sngny('fhccyl bar be zber bs -c, -f, -z, -h, be --purpx') vs (bcg.snxr_inyvq be bcg.snxr_vainyvq) naq abg bcg.hcqngr: b.sngny('--snxr-{va,}inyvq ner zrnavatyrff jvgubhg -h') vs bcg.snxr_inyvq naq bcg.snxr_vainyvq: b.sngny('--snxr-inyvq vf vapbzcngvoyr jvgu --snxr-vainyvq') tvg.purpx_ercb_be_qvr() vaqrksvyr = bcg.vaqrksvyr be tvg.ercb('ohcvaqrk') unaqyr_pgey_p() vs bcg.purpx: ybt('purpx: fgnegvat vavgvny purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) cnguf = vaqrk.erqhpr_cnguf(rkgen) vs bcg.hcqngr: vs abg cnguf: b.sngny('hcqngr (-h) erdhrfgrq ohg ab cnguf tvira') sbe (ec,cngu) va cnguf: hcqngr_vaqrk(ec) vs bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq: sbe (anzr, rag) va vaqrk.Ernqre(vaqrksvyr).svygre(rkgen be ['']): vs (bcg.zbqvsvrq naq (rag.vf_inyvq() be rag.vf_qryrgrq() be abg rag.zbqr)): pbagvahr yvar = '' vs bcg.fgnghf: vs rag.vf_qryrgrq(): yvar += 'Q ' ryvs abg rag.vf_inyvq(): vs rag.fun == vaqrk.RZCGL_FUN: yvar += 'N ' ryfr: yvar += 'Z ' ryfr: yvar += ' ' vs bcg.unfu: yvar += rag.fun.rapbqr('urk') + ' ' vs bcg.ybat: yvar += "%7f %7f " % (bpg(rag.zbqr), bpg(rag.tvgzbqr)) cevag yvar + (anzr be './') vs bcg.purpx naq (bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq be bcg.hcqngr): ybt('purpx: fgnegvat svany purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg sebz ohc vzcbeg bcgvbaf, urycref bcgfcrp = """ ohc eonpxhc-freire -- Guvf pbzznaq vf abg vagraqrq gb or eha znahnyyl. """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc-freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') # trg gur fhopbzznaq'f neti. # Abeznyyl jr pbhyq whfg cnff guvf ba gur pbzznaq yvar, ohg fvapr jr'yy bsgra # or trggvat pnyyrq ba gur bgure raq bs na ffu cvcr, juvpu graqf gb znatyr # neti (ol fraqvat vg ivn gur furyy), guvf jnl vf zhpu fnsre. ohs = flf.fgqva.ernq(4) fm = fgehpg.hacnpx('!V', ohs)[0] nffreg(fm > 0) nffreg(fm < 1000000) ohs = flf.fgqva.ernq(fm) nffreg(yra(ohs) == fm) neti = ohs.fcyvg('\0') # fgqva/fgqbhg ner fhccbfrqyl pbaarpgrq gb 'ohc freire' gung gur pnyyre # fgnegrq sbe hf (bsgra ba gur bgure raq bs na ffu ghaary), fb jr qba'g jnag # gb zvfhfr gurz. Zbir gurz bhg bs gur jnl, gura ercynpr fgqbhg jvgu # n cbvagre gb fgqree va pnfr bhe fhopbzznaq jnagf gb qb fbzrguvat jvgu vg. # # Vg zvtug or avpr gb qb gur fnzr jvgu fgqva, ohg zl rkcrevzragf fubjrq gung # ffu frrzf gb znxr vgf puvyq'f fgqree n ernqnoyr-ohg-arire-ernqf-nalguvat # fbpxrg. Gurl ernyyl fubhyq unir hfrq fuhgqbja(FUHG_JE) ba gur bgure raq # bs vg, ohg cebonoyl qvqa'g. 
Naljnl, vg'f gbb zrffl, fb yrg'f whfg znxr fher # nalbar ernqvat sebz fgqva vf qvfnccbvagrq. # # (Lbh pna'g whfg yrnir fgqva/fgqbhg "abg bcra" ol pybfvat gur svyr # qrfpevcgbef. Gura gur arkg svyr gung bcraf vf nhgbzngvpnyyl nffvtarq 0 be 1, # naq crbcyr *gelvat* gb ernq/jevgr fgqva/fgqbhg trg fperjrq.) bf.qhc2(0, 3) bf.qhc2(1, 4) bf.qhc2(2, 1) va nccebkvzngryl gur fnzr cynprEQBAYL) naq qvfgevo-0) hgvba nf(sq) va gur bevtvany grfg svyrfREFR'] = urycref.ubfganzr() bf.rkrpic(neti[0], neti) flf.rkvg(99) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo, fhocebprff, gvzr sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * cne2_bx = 0 ahyys = bcra('/qri/ahyy') qrs qroht(f): vs bcg.ireobfr: ybt(f) qrs eha(neti): # ng yrnfg va clguba 2.5, hfvat "fgqbhg=2" be "fgqbhg=flf.fgqree" orybj # qbrfa'g npghnyyl jbex, orpnhfr fhocebprff pybfrf sq #2 evtug orsber # rkrpvat sbe fbzr ernfba. Fb jr jbex nebhaq vg ol qhcyvpngvat gur sq # svefg. sq = bf.qhc(2) # pbcl fgqree gel: c = fhocebprff.Cbcra(neti, fgqbhg=sq, pybfr_sqf=Snyfr) erghea c.jnvg() svanyyl: bf.pybfr(sq) qrs cne2_frghc(): tybony cne2_bx ei = 1 gel: c = fhocebprff.Cbcra(['cne2', '--uryc'], fgqbhg=ahyys, fgqree=ahyys, fgqva=ahyys) ei = c.jnvg() rkprcg BFReebe: ybt('sfpx: jneavat: cne2 abg sbhaq; qvfnoyvat erpbirel srngherf.\a') ryfr: cne2_bx = 1 qrs cnei(yiy): vs bcg.ireobfr >= yiy: vs vfggl: erghea [] ryfr: erghea ['-d'] ryfr: erghea ['-dd'] qrs cne2_trarengr(onfr): erghea eha(['cne2', 'perngr', '-a1', '-p200'] + cnei(2) + ['--', onfr, onfr+'.cnpx', onfr+'.vqk']) qrs cne2_irevsl(onfr): erghea eha(['cne2', 'irevsl'] + cnei(3) + ['--', onfr]) qrs cne2_ercnve(onfr): erghea eha(['cne2', 'ercnve'] + cnei(2) + ['--', onfr]) qrs dhvpx_irevsl(onfr): s = bcra(onfr + '.cnpx', 'eo') s.frrx(-20, 2) jnagfhz = s.ernq(20) nffreg(yra(jnagfhz) == 20) s.frrx(0) fhz = Fun1() sbe o va puhaxlernqre(s, bf.sfgng(s.svyrab()).fg_fvmr - 20): fhz.hcqngr(o) vs fhz.qvtrfg() != jnagfhz: envfr InyhrReebe('rkcrpgrq %e, tbg %e' % (jnagfhz.rapbqr('urk'), fhz.urkqvtrfg())) qrs tvg_irevsl(onfr): vs bcg.dhvpx: gel: dhvpx_irevsl(onfr) rkprcg Rkprcgvba, r: qroht('reebe: %f\a' % r) erghea 1 erghea 0 ryfr: erghea eha(['tvg', 'irevsl-cnpx', '--', onfr]) qrs qb_cnpx(onfr, ynfg): pbqr = 0 vs cne2_bx naq cne2_rkvfgf naq (bcg.ercnve be abg bcg.trarengr): ierfhyg = cne2_irevsl(onfr) vs ierfhyg != 0: vs bcg.ercnve: eerfhyg = cne2_ercnve(onfr) vs eerfhyg != 0: cevag '%f cne2 ercnve: snvyrq (%q)' % (ynfg, eerfhyg) pbqr = eerfhyg ryfr: cevag '%f cne2 ercnve: fhpprrqrq (0)' % ynfg pbqr = 100 ryfr: cevag '%f cne2 irevsl: snvyrq (%q)' % (ynfg, ierfhyg) pbqr = ierfhyg ryfr: cevag '%f bx' % ynfg ryvs abg bcg.trarengr be (cne2_bx naq abg cne2_rkvfgf): terfhyg = tvg_irevsl(onfr) vs terfhyg != 0: cevag '%f tvg irevsl: snvyrq (%q)' % (ynfg, terfhyg) pbqr = terfhyg ryfr: vs cne2_bx naq bcg.trarengr: cerfhyg = cne2_trarengr(onfr) vs cerfhyg != 0: cevag '%f cne2 perngr: snvyrq (%q)' % (ynfg, cerfhyg) pbqr = cerfhyg ryfr: cevag '%f bx' % ynfg ryfr: cevag '%f bx' % ynfg ryfr: nffreg(bcg.trarengr naq (abg cne2_bx be cne2_rkvfgf)) qroht(' fxvccrq: cne2 svyr nyernql trarengrq.\a') erghea pbqr bcgfcrp = """ ohc sfpx [bcgvbaf...] [svyranzrf...] -- e,ercnve nggrzcg gb ercnve reebef hfvat cne2 (qnatrebhf!) 
t,trarengr trarengr nhgb-ercnve vasbezngvba hfvat cne2 i,ireobfr vapernfr ireobfvgl (pna or hfrq zber guna bapr) dhvpx whfg purpx cnpx fun1fhz, qba'g hfr tvg irevsl-cnpx w,wbof= eha 'a' wbof va cnenyyry cne2-bx vzzrqvngryl erghea 0 vs cne2 vf bx, 1 vs abg qvfnoyr-cne2 vtaber cne2 rira vs vg vf ninvynoyr """ b = bcgvbaf.Bcgvbaf('ohc sfpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) cne2_frghc() vs bcg.cne2_bx: vs cne2_bx: flf.rkvg(0) # 'gehr' va fu ryfr: flf.rkvg(1) vs bcg.qvfnoyr_cne2: cne2_bx = 0 tvg.purpx_ercb_be_qvr() vs abg rkgen: qroht('sfpx: Ab svyranzrf tvira: purpxvat nyy cnpxf.\a') rkgen = tybo.tybo(tvg.ercb('bowrpgf/cnpx/*.cnpx')) pbqr = 0 pbhag = 0 bhgfgnaqvat = {} sbe anzr va rkgen: vs anzr.raqfjvgu('.cnpx'): onfr = anzr[:-5] ryvs anzr.raqfjvgu('.vqk'): onfr = anzr[:-4] ryvs anzr.raqfjvgu('.cne2'): onfr = anzr[:-5] ryvs bf.cngu.rkvfgf(anzr + '.cnpx'): onfr = anzr ryfr: envfr Rkprcgvba('%f vf abg n cnpx svyr!' % anzr) (qve,ynfg) = bf.cngu.fcyvg(onfr) cne2_rkvfgf = bf.cngu.rkvfgf(onfr + '.cne2') vs cne2_rkvfgf naq bf.fgng(onfr + '.cne2').fg_fvmr == 0: cne2_rkvfgf = 0 flf.fgqbhg.syhfu() qroht('sfpx: purpxvat %f (%f)\a' % (ynfg, cne2_bx naq cne2_rkvfgf naq 'cne2' be 'tvg')) vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.wbof: ap = qb_cnpx(onfr, ynfg) pbqr = pbqr be ap pbhag += 1 ryfr: juvyr yra(bhgfgnaqvat) >= bcg.wbof: (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 cvq = bf.sbex() vs cvq: # cnerag bhgfgnaqvat[cvq] = 1 ryfr: # puvyq gel: flf.rkvg(qb_cnpx(onfr, ynfg)) rkprcg Rkprcgvba, r: ybt('rkprcgvba: %e\a' % r) flf.rkvg(99) juvyr yra(bhgfgnaqvat): (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.ireobfr naq vfggl: ybt('sfpx qbar. \a') flf.rkvg(pbqr) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg, trgbcg, fhocebprff, fvtany sebz ohc vzcbeg bcgvbaf, ffu sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc eonpxhc vaqrk ... ohc eonpxhc fnir ... ohc eonpxhc fcyvg ... """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc', bcgfcrp, bcgshap=trgbcg.trgbcg) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) < 2: b.sngny('nethzragf rkcrpgrq') pynff FvtRkprcgvba(Rkprcgvba): qrs __vavg__(frys, fvtahz): frys.fvtahz = fvtahz Rkprcgvba.__vavg__(frys, 'fvtany %q erprvirq' % fvtahz) qrs unaqyre(fvtahz, senzr): envfr FvtRkprcgvba(fvtahz) fvtany.fvtany(fvtany.FVTGREZ, unaqyre) fvtany.fvtany(fvtany.FVTVAG, unaqyre) fc = Abar c = Abar erg = 99 gel: ubfganzr = rkgen[0] neti = rkgen[1:] c = ffu.pbaarpg(ubfganzr, 'eonpxhc-freire') netif = '\0'.wbva(['ohc'] + neti) c.fgqva.jevgr(fgehpg.cnpx('!V', yra(netif)) + netif) c.fgqva.syhfu() znva_rkr = bf.raiveba.trg('OHC_ZNVA_RKR') be flf.neti[0] fc = fhocebprff.Cbcra([znva_rkr, 'freire'], fgqva=c.fgqbhg, fgqbhg=c.fgqva) c.fgqva.pybfr() c.fgqbhg.pybfr() svanyyl: juvyr 1: # vs jr trg n fvtany juvyr jnvgvat, jr unir gb xrrc jnvgvat, whfg # va pnfr bhe puvyq qbrfa'g qvr. 
gel: erg = c.jnvg() fc.jnvg() oernx rkprcg FvtRkprcgvba, r: ybt('\aohc eonpxhc: %f\a' % r) bf.xvyy(c.cvq, r.fvtahz) erg = 84 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc arjyvare """ b = bcgvbaf.Bcgvbaf('ohc arjyvare', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") e = er.pbzcvyr(e'([\e\a])') ynfgyra = 0 nyy = '' juvyr 1: y = e.fcyvg(nyy, 1) vs yra(y) <= 1: gel: o = bf.ernq(flf.fgqva.svyrab(), 4096) rkprcg XrlobneqVagreehcg: oernx vs abg o: oernx nyy += o ryfr: nffreg(yra(y) == 3) (yvar, fcyvgpune, nyy) = y #fcyvgpune = '\a' flf.fgqbhg.jevgr('%-*f%f' % (ynfgyra, yvar, fcyvgpune)) vs fcyvgpune == '\e': ynfgyra = yra(yvar) ryfr: ynfgyra = 0 flf.fgqbhg.syhfu() vs ynfgyra be nyy: flf.fgqbhg.jevgr('%-*f\a' % (ynfgyra, nyy)) #!/hfe/ova/rai clguba vzcbeg flf sebz ohc vzcbeg bcgvbaf, tvg, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc znetva """ b = bcgvbaf.Bcgvbaf('ohc znetva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") tvg.purpx_ercb_be_qvr() #tvg.vtaber_zvqk = 1 zv = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) ynfg = '\0'*20 ybatzngpu = 0 sbe v va zv: vs v == ynfg: pbagvahr #nffreg(fge(v) >= ynfg) cz = _unfufcyvg.ovgzngpu(ynfg, v) ybatzngpu = znk(ybatzngpu, cz) ynfg = v cevag ybatzngpu #!/hfe/ova/rai clguba sebz ohc vzcbeg bcgvbaf, qerphefr sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc qerphefr -- k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf d,dhvrg qba'g npghnyyl cevag svyranzrf cebsvyr eha haqre gur clguba cebsvyre """ b = bcgvbaf.Bcgvbaf('ohc qerphefr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar svyranzr rkcrpgrq") vg = qerphefr.erphefvir_qveyvfg(rkgen, bcg.kqri) vs bcg.cebsvyr: vzcbeg pCebsvyr qrs qb_vg(): sbe v va vg: cnff pCebsvyr.eha('qb_vg()') ryfr: vs bcg.dhvrg: sbe v va vg: cnff ryfr: sbe (anzr,fg) va vg: cevag anzr vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc fcyvg [-gpo] [-a anzr] [--orapu] [svyranzrf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu o,oybof bhgchg n frevrf bs oybo vqf g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) A,abbc qba'g npghnyyl fnir gur qngn naljurer d,dhvrg qba'g cevag cebterff zrffntrf i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) pbcl whfg pbcl vachg gb bhgchg, unfufcyvggvat nybat gur jnl orapu cevag orapuznex gvzvatf gb fgqree znk-cnpx-fvmr= znkvzhz olgrf va n fvatyr cnpx znk-cnpx-bowrpgf= znkvzhz ahzore bs bowrpgf va n fvatyr cnpx snabhg= znkvzhz ahzore bs oybof va n fvatyr gerr """ b = bcgvbaf.Bcgvbaf('ohc fcyvg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr be bcg.abbc be bcg.pbcl): b.sngny("hfr bar be zber bs -o, -g, -p, -a, -A, --pbcl") vs (bcg.abbc be bcg.pbcl) naq (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny('-A vf vapbzcngvoyr jvgu -o, -g, -p, -a') vs bcg.ireobfr >= 2: tvg.ireobfr = bcg.ireobfr - 1 bcg.orapu = 1 vs bcg.znk_cnpx_fvmr: unfufcyvg.znk_cnpx_fvmr = cnefr_ahz(bcg.znk_cnpx_fvmr) vs bcg.znk_cnpx_bowrpgf: unfufcyvg.znk_cnpx_bowrpgf = cnefr_ahz(bcg.znk_cnpx_bowrpgf) vs bcg.snabhg: unfufcyvg.snabhg = cnefr_ahz(bcg.snabhg) vs bcg.oybof: unfufcyvg.snabhg = 0 vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") fgneg_gvzr = gvzr.gvzr() ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.abbc be bcg.pbcl: pyv = j = byqers = Abar ryvs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() svyrf = rkgen naq (bcra(sa) sbe sa va rkgen) be [flf.fgqva] vs j: funyvfg = unfufcyvg.fcyvg_gb_funyvfg(j, svyrf) gerr = j.arj_gerr(funyvfg) ryfr: ynfg = 0 sbe (oybo, ovgf) va unfufcyvg.unfufcyvg_vgre(svyrf): unfufcyvg.gbgny_fcyvg += yra(oybo) vs bcg.pbcl: flf.fgqbhg.jevgr(fge(oybo)) zrtf = unfufcyvg.gbgny_fcyvg/1024/1024 vs abg bcg.dhvrg naq ynfg != zrtf: cebterff('%q Zolgrf ernq\e' % zrtf) ynfg = zrtf cebterff('%q Zolgrf ernq, qbar.\a' % zrtf) vs bcg.ireobfr: ybt('\a') vs bcg.oybof: sbe (zbqr,anzr,ova) va funyvfg: cevag ova.rapbqr('urk') vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fcyvg\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') vs j: j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() frpf = gvzr.gvzr() - fgneg_gvzr fvmr = unfufcyvg.gbgny_fcyvg vs bcg.orapu: ybt('\aohc: %.2sxolgrf va %.2s frpf = %.2s xolgrf/frp\a' % (fvmr/1024., frpf, fvmr/1024./frpf)) #!/hfe/ova/rai clguba vzcbeg flf, er, fgehpg, zznc sebz ohc vzcbeg tvg, bcgvbaf sebz ohc.urycref vzcbeg * qrs f_sebz_olgrf(olgrf): pyvfg = [pue(o) sbe o va olgrf] erghea ''.wbva(pyvfg) qrs ercbeg(pbhag): svryqf = ['IzFvmr', 'IzEFF', 'IzQngn', 'IzFgx'] q = {} sbe yvar va bcra('/cebp/frys/fgnghf').ernqyvarf(): y = er.fcyvg(e':\f*', yvar.fgevc(), 1) q[y[0]] = y[1] vs pbhag >= 0: r1 = pbhag svryqf = [q[x] sbe x va svryqf] ryfr: r1 = '' cevag ('%9f ' + ('%10f ' * yra(svryqf))) % ghcyr([r1] + svryqf) flf.fgqbhg.syhfu() bcgfcrp = """ ohc zrzgrfg [-a ryrzragf] [-p plpyrf] -- a,ahzore= ahzore bs bowrpgf cre plpyr p,plpyrf= 
ahzore bs plpyrf gb eha vtaber-zvqk vtaber .zvqk svyrf, hfr bayl .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zrzgrfg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') tvg.vtaber_zvqk = bcg.vtaber_zvqk tvg.purpx_ercb_be_qvr() z = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) plpyrf = bcg.plpyrf be 100 ahzore = bcg.ahzore be 10000 ercbeg(-1) s = bcra('/qri/henaqbz') n = zznc.zznc(-1, 20) ercbeg(0) sbe p va kenatr(plpyrf): sbe a va kenatr(ahzore): o = s.ernq(3) vs 0: olgrf = yvfg(fgehpg.hacnpx('!OOO', o)) + [0]*17 olgrf[2] &= 0ks0 ova = fgehpg.cnpx('!20f', f_sebz_olgrf(olgrf)) ryfr: n[0:2] = o[0:2] n[2] = pue(beq(o[2]) & 0ks0) ova = fge(n[0:20]) #cevag ova.rapbqr('urk') z.rkvfgf(ova) ercbeg((p+1)*ahzore) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * qrs cevag_abqr(grkg, a): cersvk = '' vs bcg.unfu: cersvk += "%f " % a.unfu.rapbqr('urk') vs fgng.F_VFQVE(a.zbqr): cevag '%f%f/' % (cersvk, grkg) ryvs fgng.F_VFYAX(a.zbqr): cevag '%f%f@' % (cersvk, grkg) ryfr: cevag '%f%f' % (cersvk, grkg) bcgfcrp = """ ohc yf -- f,unfu fubj unfu sbe rnpu svyr """ b = bcgvbaf.Bcgvbaf('ohc yf', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) vs abg rkgen: rkgen = ['/'] erg = 0 sbe q va rkgen: gel: a = gbc.yerfbyir(q) vs fgng.F_VFQVE(a.zbqr): sbe fho va a: cevag_abqr(fho.anzr, fho) ryfr: cevag_abqr(q, a) rkprcg isf.AbqrReebe, r: ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er, fgng, ernqyvar, sazngpu sebz ohc vzcbeg bcgvbaf, tvg, fudhbgr, isf sebz ohc.urycref vzcbeg * qrs abqr_anzr(grkg, a): vs fgng.F_VFQVE(a.zbqr): erghea '%f/' % grkg ryvs fgng.F_VFYAX(a.zbqr): erghea '%f@' % grkg ryfr: erghea '%f' % grkg qrs qb_yf(cngu, a): y = [] vs fgng.F_VFQVE(a.zbqr): sbe fho va a: y.nccraq(abqr_anzr(fho.anzr, fho)) ryfr: y.nccraq(abqr_anzr(cngu, a)) cevag pbyhzangr(y, '') qrs jevgr_gb_svyr(vas, bhgs): sbe oybo va puhaxlernqre(vas): bhgs.jevgr(oybo) qrs vachgvgre(): vs bf.vfnggl(flf.fgqva.svyrab()): juvyr 1: gel: lvryq enj_vachg('ohc> ') rkprcg RBSReebe: oernx ryfr: sbe yvar va flf.fgqva: lvryq yvar qrs _pbzcyrgre_trg_fhof(yvar): (dglcr, ynfgjbeq) = fudhbgr.hasvavfurq_jbeq(yvar) (qve,anzr) = bf.cngu.fcyvg(ynfgjbeq) #ybt('\apbzcyrgre: %e %e %e\a' % (dglcr, ynfgjbeq, grkg)) a = cjq.erfbyir(qve) fhof = yvfg(svygre(ynzoqn k: k.anzr.fgnegfjvgu(anzr), a.fhof())) erghea (qve, anzr, dglcr, ynfgjbeq, fhof) _ynfg_yvar = Abar _ynfg_erf = Abar qrs pbzcyrgre(grkg, fgngr): tybony _ynfg_yvar tybony _ynfg_erf gel: yvar = ernqyvar.trg_yvar_ohssre()[:ernqyvar.trg_raqvqk()] vs _ynfg_yvar != yvar: _ynfg_erf = _pbzcyrgre_trg_fhof(yvar) _ynfg_yvar = yvar (qve, anzr, dglcr, ynfgjbeq, fhof) = _ynfg_erf vs fgngr < yra(fhof): fa = fhof[fgngr] fa1 = fa.erfbyir('') # qrers flzyvaxf shyyanzr = bf.cngu.wbva(qve, fa.anzr) vs fgng.F_VFQVE(fa1.zbqr): erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr+'/', grezvangr=Snyfr) ryfr: erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr, grezvangr=Gehr) + ' ' erghea grkg + erg rkprcg Rkprcgvba, r: ybt('\areebe va pbzcyrgvba: %f\a' % r) bcgfcrp = """ ohc sgc """ b = bcgvbaf.Bcgvbaf('ohc sgc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) cjq = gbc vs rkgen: yvarf = rkgen ryfr: ernqyvar.frg_pbzcyrgre_qryvzf(' \g\a\e/') ernqyvar.frg_pbzcyrgre(pbzcyrgre) ernqyvar.cnefr_naq_ovaq("gno: pbzcyrgr") yvarf = vachgvgre() sbe yvar va yvarf: vs abg yvar.fgevc(): pbagvahr jbeqf = [jbeq sbe 
(jbeqfgneg,jbeq) va fudhbgr.dhbgrfcyvg(yvar)] pzq = jbeqf[0].ybjre() #ybt('rkrphgr: %e %e\a' % (pzq, cnez)) gel: vs pzq == 'yf': sbe cnez va (jbeqf[1:] be ['.']): qb_yf(cnez, cjq.erfbyir(cnez)) ryvs pzq == 'pq': sbe cnez va jbeqf[1:]: cjq = cjq.erfbyir(cnez) ryvs pzq == 'cjq': cevag cjq.shyyanzr() ryvs pzq == 'png': sbe cnez va jbeqf[1:]: jevgr_gb_svyr(cjq.erfbyir(cnez).bcra(), flf.fgqbhg) ryvs pzq == 'trg': vs yra(jbeqf) abg va [2,3]: envfr Rkprcgvba('Hfntr: trg [ybpnyanzr]') eanzr = jbeqf[1] (qve,onfr) = bf.cngu.fcyvg(eanzr) yanzr = yra(jbeqf)>2 naq jbeqf[2] be onfr vas = cjq.erfbyir(eanzr).bcra() ybt('Fnivat %e\a' % yanzr) jevgr_gb_svyr(vas, bcra(yanzr, 'jo')) ryvs pzq == 'ztrg': sbe cnez va jbeqf[1:]: (qve,onfr) = bf.cngu.fcyvg(cnez) sbe a va cjq.erfbyir(qve).fhof(): vs sazngpu.sazngpu(a.anzr, onfr): gel: ybt('Fnivat %e\a' % a.anzr) vas = a.bcra() bhgs = bcra(a.anzr, 'jo') jevgr_gb_svyr(vas, bhgs) bhgs.pybfr() rkprcg Rkprcgvba, r: ybt(' reebe: %f\a' % r) ryvs pzq == 'uryc' be pzq == '?': ybt('Pbzznaqf: yf pq cjq png trg ztrg uryc dhvg\a') ryvs pzq == 'dhvg' be pzq == 'rkvg' be pzq == 'olr': oernx ryfr: envfr Rkprcgvba('ab fhpu pbzznaq %e' % pzq) rkprcg Rkprcgvba, r: ybt('reebe: %f\a' % r) #envfr
Naljnl, vg'f gbb zrffl, fb yrg'f whfg znxr fher # nalbar ernqvat sebz fgqva vf qvfnccbvagrq. # # (Lbh pna'g whfg yrnir fgqva/fgqbhg "abg bcra" ol pybfvat gur svyr # qrfpevcgbef. Gura gur arkg svyr gung bcraf vf nhgbzngvpnyyl nffvtarq 0 be 1, # naq crbcyr *gelvat* gb ernq/jevgr fgqva/fgqbhg trg fperjrq.) bf.qhc2(0, 3) bf.qhc2(1, 4) bf.qhc2(2, 1) sq = bf.bcra('/qri/ahyy', bf.B_EQBAYL) bf.qhc2(sq, 0) bf.pybfr(sq) bf.raiveba['OHC_FREIRE_ERIREFR'] = urycref.ubfganzr() bf.rkrpic(neti[0], neti) flf.rkvg(99) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo, fhocebprff, gvzr sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * cne2_bx = 0 ahyys = bcra('/qri/ahyy') qrs qroht(f): vs bcg.ireobfr: ybt(f) qrs eha(neti): # ng yrnfg va clguba 2.5, hfvat "fgqbhg=2" be "fgqbhg=flf.fgqree" orybj # qbrfa'g npghnyyl jbex, orpnhfr fhocebprff pybfrf sq #2 evtug orsber # rkrpvat sbe fbzr ernfba. Fb jr jbex nebhaq vg ol qhcyvpngvat gur sq # svefg. sq = bf.qhc(2) # pbcl fgqree gel: c = fhocebprff.Cbcra(neti, fgqbhg=sq, pybfr_sqf=Snyfr) erghea c.jnvg() svanyyl: bf.pybfr(sq) qrs cne2_frghc(): tybony cne2_bx ei = 1 gel: c = fhocebprff.Cbcra(['cne2', '--uryc'], fgqbhg=ahyys, fgqree=ahyys, fgqva=ahyys) ei = c.jnvg() rkprcg BFReebe: ybt('sfpx: jneavat: cne2 abg sbhaq; qvfnoyvat erpbirel srngherf.\a') ryfr: cne2_bx = 1 qrs cnei(yiy): vs bcg.ireobfr >= yiy: vs vfggl: erghea [] ryfr: erghea ['-d'] ryfr: erghea ['-dd'] qrs cne2_trarengr(onfr): erghea eha(['cne2', 'perngr', '-a1', '-p200'] + cnei(2) + ['--', onfr, onfr+'.cnpx', onfr+'.vqk']) qrs cne2_irevsl(onfr): erghea eha(['cne2', 'irevsl'] + cnei(3) + ['--', onfr]) qrs cne2_ercnve(onfr): erghea eha(['cne2', 'ercnve'] + cnei(2) + ['--', onfr]) qrs dhvpx_irevsl(onfr): s = bcra(onfr + '.cnpx', 'eo') s.frrx(-20, 2) jnagfhz = s.ernq(20) nffreg(yra(jnagfhz) == 20) s.frrx(0) fhz = Fun1() sbe o va puhaxlernqre(s, bf.sfgng(s.svyrab()).fg_fvmr - 20): fhz.hcqngr(o) vs fhz.qvtrfg() != jnagfhz: envfr InyhrReebe('rkcrpgrq %e, tbg %e' % (jnagfhz.rapbqr('urk'), fhz.urkqvtrfg())) qrs tvg_irevsl(onfr): vs bcg.dhvpx: gel: dhvpx_irevsl(onfr) rkprcg Rkprcgvba, r: qroht('reebe: %f\a' % r) erghea 1 erghea 0 ryfr: erghea eha(['tvg', 'irevsl-cnpx', '--', onfr]) qrs qb_cnpx(onfr, ynfg): pbqr = 0 vs cne2_bx naq cne2_rkvfgf naq (bcg.ercnve be abg bcg.trarengr): ierfhyg = cne2_irevsl(onfr) vs ierfhyg != 0: vs bcg.ercnve: eerfhyg = cne2_ercnve(onfr) vs eerfhyg != 0: cevag '%f cne2 ercnve: snvyrq (%q)' % (ynfg, eerfhyg) pbqr = eerfhyg ryfr: cevag '%f cne2 ercnve: fhpprrqrq (0)' % ynfg pbqr = 100 ryfr: cevag '%f cne2 irevsl: snvyrq (%q)' % (ynfg, ierfhyg) pbqr = ierfhyg ryfr: cevag '%f bx' % ynfg ryvs abg bcg.trarengr be (cne2_bx naq abg cne2_rkvfgf): terfhyg = tvg_irevsl(onfr) vs terfhyg != 0: cevag '%f tvg irevsl: snvyrq (%q)' % (ynfg, terfhyg) pbqr = terfhyg ryfr: vs cne2_bx naq bcg.trarengr: cerfhyg = cne2_trarengr(onfr) vs cerfhyg != 0: cevag '%f cne2 perngr: snvyrq (%q)' % (ynfg, cerfhyg) pbqr = cerfhyg ryfr: cevag '%f bx' % ynfg ryfr: cevag '%f bx' % ynfg ryfr: nffreg(bcg.trarengr naq (abg cne2_bx be cne2_rkvfgf)) qroht(' fxvccrq: cne2 svyr nyernql trarengrq.\a') erghea pbqr bcgfcrp = """ ohc sfpx [bcgvbaf...] [svyranzrf...] -- e,ercnve nggrzcg gb ercnve reebef hfvat cne2 (qnatrebhf!) 
t,trarengr trarengr nhgb-ercnve vasbezngvba hfvat cne2 i,ireobfr vapernfr ireobfvgl (pna or hfrq zber guna bapr) dhvpx whfg purpx cnpx fun1fhz, qba'g hfr tvg irevsl-cnpx w,wbof= eha 'a' wbof va cnenyyry cne2-bx vzzrqvngryl erghea 0 vs cne2 vf bx, 1 vs abg qvfnoyr-cne2 vtaber cne2 rira vs vg vf ninvynoyr """ b = bcgvbaf.Bcgvbaf('ohc sfpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) cne2_frghc() vs bcg.cne2_bx: vs cne2_bx: flf.rkvg(0) # 'gehr' va fu ryfr: flf.rkvg(1) vs bcg.qvfnoyr_cne2: cne2_bx = 0 tvg.purpx_ercb_be_qvr() vs abg rkgen: qroht('sfpx: Ab svyranzrf tvira: purpxvat nyy cnpxf.\a') rkgen = tybo.tybo(tvg.ercb('bowrpgf/cnpx/*.cnpx')) pbqr = 0 pbhag = 0 bhgfgnaqvat = {} sbe anzr va rkgen: vs anzr.raqfjvgu('.cnpx'): onfr = anzr[:-5] ryvs anzr.raqfjvgu('.vqk'): onfr = anzr[:-4] ryvs anzr.raqfjvgu('.cne2'): onfr = anzr[:-5] ryvs bf.cngu.rkvfgf(anzr + '.cnpx'): onfr = anzr ryfr: envfr Rkprcgvba('%f vf abg n cnpx svyr!' % anzr) (qve,ynfg) = bf.cngu.fcyvg(onfr) cne2_rkvfgf = bf.cngu.rkvfgf(onfr + '.cne2') vs cne2_rkvfgf naq bf.fgng(onfr + '.cne2').fg_fvmr == 0: cne2_rkvfgf = 0 flf.fgqbhg.syhfu() qroht('sfpx: purpxvat %f (%f)\a' % (ynfg, cne2_bx naq cne2_rkvfgf naq 'cne2' be 'tvg')) vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.wbof: ap = qb_cnpx(onfr, ynfg) pbqr = pbqr be ap pbhag += 1 ryfr: juvyr yra(bhgfgnaqvat) >= bcg.wbof: (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 cvq = bf.sbex() vs cvq: # cnerag bhgfgnaqvat[cvq] = 1 ryfr: # puvyq gel: flf.rkvg(qb_cnpx(onfr, ynfg)) rkprcg Rkprcgvba, r: ybt('rkprcgvba: %e\a' % r) flf.rkvg(99) juvyr yra(bhgfgnaqvat): (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.ireobfr naq vfggl: ybt('sfpx qbar. \a') flf.rkvg(pbqr) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg, trgbcg, fhocebprff, fvtany sebz ohc vzcbeg bcgvbaf, ffu sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc eonpxhc vaqrk ... ohc eonpxhc fnir ... ohc eonpxhc fcyvg ... """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc', bcgfcrp, bcgshap=trgbcg.trgbcg) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) < 2: b.sngny('nethzragf rkcrpgrq') pynff FvtRkprcgvba(Rkprcgvba): qrs __vavg__(frys, fvtahz): frys.fvtahz = fvtahz Rkprcgvba.__vavg__(frys, 'fvtany %q erprvirq' % fvtahz) qrs unaqyre(fvtahz, senzr): envfr FvtRkprcgvba(fvtahz) fvtany.fvtany(fvtany.FVTGREZ, unaqyre) fvtany.fvtany(fvtany.FVTVAG, unaqyre) fc = Abar c = Abar erg = 99 gel: ubfganzr = rkgen[0] neti = rkgen[1:] c = ffu.pbaarpg(ubfganzr, 'eonpxhc-freire') netif = '\0'.wbva(['ohc'] + neti) c.fgqva.jevgr(fgehpg.cnpx('!V', yra(netif)) + netif) c.fgqva.syhfu() znva_rkr = bf.raiveba.trg('OHC_ZNVA_RKR') be flf.neti[0] fc = fhocebprff.Cbcra([znva_rkr, 'freire'], fgqva=c.fgqbhg, fgqbhg=c.fgqva) c.fgqva.pybfr() c.fgqbhg.pybfr() svanyyl: juvyr 1: # vs jr trg n fvtany juvyr jnvgvat, jr unir gb xrrc jnvgvat, whfg # va pnfr bhe puvyq qbrfa'g qvr. 
gel: erg = c.jnvg() fc.jnvg() oernx rkprcg FvtRkprcgvba, r: ybt('\aohc eonpxhc: %f\a' % r) bf.xvyy(c.cvq, r.fvtahz) erg = 84 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc arjyvare """ b = bcgvbaf.Bcgvbaf('ohc arjyvare', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") e = er.pbzcvyr(e'([\e\a])') ynfgyra = 0 nyy = '' juvyr 1: y = e.fcyvg(nyy, 1) vs yra(y) <= 1: gel: o = bf.ernq(flf.fgqva.svyrab(), 4096) rkprcg XrlobneqVagreehcg: oernx vs abg o: oernx nyy += o ryfr: nffreg(yra(y) == 3) (yvar, fcyvgpune, nyy) = y #fcyvgpune = '\a' flf.fgqbhg.jevgr('%-*f%f' % (ynfgyra, yvar, fcyvgpune)) vs fcyvgpune == '\e': ynfgyra = yra(yvar) ryfr: ynfgyra = 0 flf.fgqbhg.syhfu() vs ynfgyra be nyy: flf.fgqbhg.jevgr('%-*f\a' % (ynfgyra, nyy)) #!/hfe/ova/rai clguba vzcbeg flf sebz ohc vzcbeg bcgvbaf, tvg, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc znetva """ b = bcgvbaf.Bcgvbaf('ohc znetva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") tvg.purpx_ercb_be_qvr() #tvg.vtaber_zvqk = 1 zv = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) ynfg = '\0'*20 ybatzngpu = 0 sbe v va zv: vs v == ynfg: pbagvahr #nffreg(fge(v) >= ynfg) cz = _unfufcyvg.ovgzngpu(ynfg, v) ybatzngpu = znk(ybatzngpu, cz) ynfg = v cevag ybatzngpu #!/hfe/ova/rai clguba sebz ohc vzcbeg bcgvbaf, qerphefr sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc qerphefr -- k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf d,dhvrg qba'g npghnyyl cevag svyranzrf cebsvyr eha haqre gur clguba cebsvyre """ b = bcgvbaf.Bcgvbaf('ohc qerphefr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar svyranzr rkcrpgrq") vg = qerphefr.erphefvir_qveyvfg(rkgen, bcg.kqri) vs bcg.cebsvyr: vzcbeg pCebsvyr qrs qb_vg(): sbe v va vg: cnff pCebsvyr.eha('qb_vg()') ryfr: vs bcg.dhvrg: sbe v va vg: cnff ryfr: sbe (anzr,fg) va vg: cevag anzr vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc fcyvg [-gpo] [-a anzr] [--orapu] [svyranzrf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu o,oybof bhgchg n frevrf bs oybo vqf g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) A,abbc qba'g npghnyyl fnir gur qngn naljurer d,dhvrg qba'g cevag cebterff zrffntrf i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) pbcl whfg pbcl vachg gb bhgchg, unfufcyvggvat nybat gur jnl orapu cevag orapuznex gvzvatf gb fgqree znk-cnpx-fvmr= znkvzhz olgrf va n fvatyr cnpx znk-cnpx-bowrpgf= znkvzhz ahzore bs bowrpgf va n fvatyr cnpx snabhg= znkvzhz ahzore bs oybof va n fvatyr gerr """ b = bcgvbaf.Bcgvbaf('ohc fcyvg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr be bcg.abbc be bcg.pbcl): b.sngny("hfr bar be zber bs -o, -g, -p, -a, -A, --pbcl") vs (bcg.abbc be bcg.pbcl) naq (bcg.oybof be bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny('-A vf vapbzcngvoyr jvgu -o, -g, -p, -a') vs bcg.ireobfr >= 2: tvg.ireobfr = bcg.ireobfr - 1 bcg.orapu = 1 vs bcg.znk_cnpx_fvmr: unfufcyvg.znk_cnpx_fvmr = cnefr_ahz(bcg.znk_cnpx_fvmr) vs bcg.znk_cnpx_bowrpgf: unfufcyvg.znk_cnpx_bowrpgf = cnefr_ahz(bcg.znk_cnpx_bowrpgf) vs bcg.snabhg: unfufcyvg.snabhg = cnefr_ahz(bcg.snabhg) vs bcg.oybof: unfufcyvg.snabhg = 0 vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") fgneg_gvzr = gvzr.gvzr() ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.abbc be bcg.pbcl: pyv = j = byqers = Abar ryvs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() svyrf = rkgen naq (bcra(sa) sbe sa va rkgen) be [flf.fgqva] vs j: funyvfg = unfufcyvg.fcyvg_gb_funyvfg(j, svyrf) gerr = j.arj_gerr(funyvfg) ryfr: ynfg = 0 sbe (oybo, ovgf) va unfufcyvg.unfufcyvg_vgre(svyrf): unfufcyvg.gbgny_fcyvg += yra(oybo) vs bcg.pbcl: flf.fgqbhg.jevgr(fge(oybo)) zrtf = unfufcyvg.gbgny_fcyvg/1024/1024 vs abg bcg.dhvrg naq ynfg != zrtf: cebterff('%q Zolgrf ernq\e' % zrtf) ynfg = zrtf cebterff('%q Zolgrf ernq, qbar.\a' % zrtf) vs bcg.ireobfr: ybt('\a') vs bcg.oybof: sbe (zbqr,anzr,ova) va funyvfg: cevag ova.rapbqr('urk') vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fcyvg\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') vs j: j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() frpf = gvzr.gvzr() - fgneg_gvzr fvmr = unfufcyvg.gbgny_fcyvg vs bcg.orapu: ybt('\aohc: %.2sxolgrf va %.2s frpf = %.2s xolgrf/frp\a' % (fvmr/1024., frpf, fvmr/1024./frpf)) #!/hfe/ova/rai clguba vzcbeg flf, er, fgehpg, zznc sebz ohc vzcbeg tvg, bcgvbaf sebz ohc.urycref vzcbeg * qrs f_sebz_olgrf(olgrf): pyvfg = [pue(o) sbe o va olgrf] erghea ''.wbva(pyvfg) qrs ercbeg(pbhag): svryqf = ['IzFvmr', 'IzEFF', 'IzQngn', 'IzFgx'] q = {} sbe yvar va bcra('/cebp/frys/fgnghf').ernqyvarf(): y = er.fcyvg(e':\f*', yvar.fgevc(), 1) q[y[0]] = y[1] vs pbhag >= 0: r1 = pbhag svryqf = [q[x] sbe x va svryqf] ryfr: r1 = '' cevag ('%9f ' + ('%10f ' * yra(svryqf))) % ghcyr([r1] + svryqf) flf.fgqbhg.syhfu() bcgfcrp = """ ohc zrzgrfg [-a ryrzragf] [-p plpyrf] -- a,ahzore= ahzore bs bowrpgf cre plpyr p,plpyrf= 
ahzore bs plpyrf gb eha vtaber-zvqk vtaber .zvqk svyrf, hfr bayl .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zrzgrfg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') tvg.vtaber_zvqk = bcg.vtaber_zvqk tvg.purpx_ercb_be_qvr() z = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) plpyrf = bcg.plpyrf be 100 ahzore = bcg.ahzore be 10000 ercbeg(-1) s = bcra('/qri/henaqbz') n = zznc.zznc(-1, 20) ercbeg(0) sbe p va kenatr(plpyrf): sbe a va kenatr(ahzore): o = s.ernq(3) vs 0: olgrf = yvfg(fgehpg.hacnpx('!OOO', o)) + [0]*17 olgrf[2] &= 0ks0 ova = fgehpg.cnpx('!20f', f_sebz_olgrf(olgrf)) ryfr: n[0:2] = o[0:2] n[2] = pue(beq(o[2]) & 0ks0) ova = fge(n[0:20]) #cevag ova.rapbqr('urk') z.rkvfgf(ova) ercbeg((p+1)*ahzore) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * qrs cevag_abqr(grkg, a): cersvk = '' vs bcg.unfu: cersvk += "%f " % a.unfu.rapbqr('urk') vs fgng.F_VFQVE(a.zbqr): cevag '%f%f/' % (cersvk, grkg) ryvs fgng.F_VFYAX(a.zbqr): cevag '%f%f@' % (cersvk, grkg) ryfr: cevag '%f%f' % (cersvk, grkg) bcgfcrp = """ ohc yf -- f,unfu fubj unfu sbe rnpu svyr """ b = bcgvbaf.Bcgvbaf('ohc yf', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) vs abg rkgen: rkgen = ['/'] erg = 0 sbe q va rkgen: gel: a = gbc.yerfbyir(q) vs fgng.F_VFQVE(a.zbqr): sbe fho va a: cevag_abqr(fho.anzr, fho) ryfr: cevag_abqr(q, a) rkprcg isf.AbqrReebe, r: ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er, fgng, ernqyvar, sazngpu sebz ohc vzcbeg bcgvbaf, tvg, fudhbgr, isf sebz ohc.urycref vzcbeg * qrs abqr_anzr(grkg, a): vs fgng.F_VFQVE(a.zbqr): erghea '%f/' % grkg ryvs fgng.F_VFYAX(a.zbqr): erghea '%f@' % grkg ryfr: erghea '%f' % grkg qrs qb_yf(cngu, a): y = [] vs fgng.F_VFQVE(a.zbqr): sbe fho va a: y.nccraq(abqr_anzr(fho.anzr, fho)) ryfr: y.nccraq(abqr_anzr(cngu, a)) cevag pbyhzangr(y, '') qrs jevgr_gb_svyr(vas, bhgs): sbe oybo va puhaxlernqre(vas): bhgs.jevgr(oybo) qrs vachgvgre(): vs bf.vfnggl(flf.fgqva.svyrab()): juvyr 1: gel: lvryq enj_vachg('ohc> ') rkprcg RBSReebe: oernx ryfr: sbe yvar va flf.fgqva: lvryq yvar qrs _pbzcyrgre_trg_fhof(yvar): (dglcr, ynfgjbeq) = fudhbgr.hasvavfurq_jbeq(yvar) (qve,anzr) = bf.cngu.fcyvg(ynfgjbeq) #ybt('\apbzcyrgre: %e %e %e\a' % (dglcr, ynfgjbeq, grkg)) a = cjq.erfbyir(qve) fhof = yvfg(svygre(ynzoqn k: k.anzr.fgnegfjvgu(anzr), a.fhof())) erghea (qve, anzr, dglcr, ynfgjbeq, fhof) _ynfg_yvar = Abar _ynfg_erf = Abar qrs pbzcyrgre(grkg, fgngr): tybony _ynfg_yvar tybony _ynfg_erf gel: yvar = ernqyvar.trg_yvar_ohssre()[:ernqyvar.trg_raqvqk()] vs _ynfg_yvar != yvar: _ynfg_erf = _pbzcyrgre_trg_fhof(yvar) _ynfg_yvar = yvar (qve, anzr, dglcr, ynfgjbeq, fhof) = _ynfg_erf vs fgngr < yra(fhof): fa = fhof[fgngr] fa1 = fa.erfbyir('') # qrers flzyvaxf shyyanzr = bf.cngu.wbva(qve, fa.anzr) vs fgng.F_VFQVE(fa1.zbqr): erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr+'/', grezvangr=Snyfr) ryfr: erg = fudhbgr.jung_gb_nqq(dglcr, ynfgjbeq, shyyanzr, grezvangr=Gehr) + ' ' erghea grkg + erg rkprcg Rkprcgvba, r: ybt('\areebe va pbzcyrgvba: %f\a' % r) bcgfcrp = """ ohc sgc """ b = bcgvbaf.Bcgvbaf('ohc sgc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) cjq = gbc vs rkgen: yvarf = rkgen ryfr: ernqyvar.frg_pbzcyrgre_qryvzf(' \g\a\e/') ernqyvar.frg_pbzcyrgre(pbzcyrgre) ernqyvar.cnefr_naq_ovaq("gno: pbzcyrgr") yvarf = vachgvgre() sbe yvar va yvarf: vs abg yvar.fgevc(): pbagvahr jbeqf = [jbeq sbe 
(jbeqfgneg,jbeq) va fudhbgr.dhbgrfcyvg(yvar)] pzq = jbeqf[0].ybjre() #ybt('rkrphgr: %e %e\a' % (pzq, cnez)) gel: vs pzq == 'yf': sbe cnez va (jbeqf[1:] be ['.']): qb_yf(cnez, cjq.erfbyir(cnez)) ryvs pzq == 'pq': sbe cnez va jbeqf[1:]: cjq = cjq.erfbyir(cnez) ryvs pzq == 'cjq': cevag cjq.shyyanzr() ryvs pzq == 'png': sbe cnez va jbeqf[1:]: jevgr_gb_svyr(cjq.erfbyir(cnez).bcra(), flf.fgqbhg) ryvs pzq == 'trg': vs yra(jbeqf) abg va [2,3]: envfr Rkprcgvba('Hfntr: trg [ybpnyanzr]') eanzr = jbeqf[1] (qve,onfr) = bf.cngu.fcyvg(eanzr) yanzr = yra(jbeqf)>2 naq jbeqf[2] be onfr vas = cjq.erfbyir(eanzr).bcra() ybt('Fnivat %e\a' % yanzr) jevgr_gb_svyr(vas, bcra(yanzr, 'jo')) ryvs pzq == 'ztrg': sbe cnez va jbeqf[1:]: (qve,onfr) = bf.cngu.fcyvg(cnez) sbe a va cjq.erfbyir(qve).fhof(): vs sazngpu.sazngpu(a.anzr, onfr): gel: ybt('Fnivat %e\a' % a.anzr) vas = a.bcra() bhgs = bcra(a.anzr, 'jo') jevgr_gb_svyr(vas, bhgs) bhgs.pybfr() rkprcg Rkprcgvba, r: ybt(' reebe: %f\a' % r) ryvs pzq == 'uryc' be pzq == '?': ybt('Pbzznaqf: yf pq cjq png trg ztrg uryc dhvg\a') ryvs pzq == 'dhvg' be pzq == 'rkvg' be pzq == 'olr': oernx ryfr: envfr Rkprcgvba('ab fhpu pbzznaq %e' % pzq) rkprcg Rkprcgvba, r: ybt('reebe: %f\a' % r) #envfr #!/hfe/ova/rai clguba vzcbeg flf, zznc sebz ohc vzcbeg bcgvbaf, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc enaqbz [-F frrq] -- F,frrq= bcgvbany enaqbz ahzore frrq (qrsnhyg 1) s,sbepr cevag enaqbz qngn gb fgqbhg rira vs vg'f n ggl """ b = bcgvbaf.Bcgvbaf('ohc enaqbz', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") gbgny = cnefr_ahz(rkgen[0]) vs bcg.sbepr be (abg bf.vfnggl(1) naq abg ngbv(bf.raiveba.trg('OHC_SBEPR_GGL')) & 1): _unfufcyvg.jevgr_enaqbz(flf.fgqbhg.svyrab(), gbgny, bcg.frrq be 0) ryfr: ybt('reebe: abg jevgvat ovanel qngn gb n grezvany. 
Hfr -s gb sbepr.\a') flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc uryc """ b = bcgvbaf.Bcgvbaf('ohc uryc', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) == 0: # gur jenccre cebtenz cebivqrf gur qrsnhyg hfntr fgevat bf.rkrpic(bf.raiveba['OHC_ZNVA_RKR'], ['ohc']) ryvs yra(rkgen) == 1: qbpanzr = (rkgen[0]=='ohc' naq 'ohc' be ('ohc-%f' % rkgen[0])) rkr = flf.neti[0] (rkrcngu, rkrsvyr) = bf.cngu.fcyvg(rkr) znacngu = bf.cngu.wbva(rkrcngu, '../Qbphzragngvba/' + qbpanzr + '.[1-9]') t = tybo.tybo(znacngu) vs t: bf.rkrpic('zna', ['zna', '-y', t[0]]) ryfr: bf.rkrpic('zna', ['zna', qbpanzr]) ryfr: b.sngny("rknpgyl bar pbzznaq anzr rkcrpgrq") #!/hfe/ova/rai clguba vzcbeg flf, bf, fgng, reeab, shfr, er, gvzr, grzcsvyr sebz ohc vzcbeg bcgvbaf, tvg, isf sebz ohc.urycref vzcbeg * pynff Fgng(shfr.Fgng): qrs __vavg__(frys): frys.fg_zbqr = 0 frys.fg_vab = 0 frys.fg_qri = 0 frys.fg_ayvax = 0 frys.fg_hvq = 0 frys.fg_tvq = 0 frys.fg_fvmr = 0 frys.fg_ngvzr = 0 frys.fg_zgvzr = 0 frys.fg_pgvzr = 0 frys.fg_oybpxf = 0 frys.fg_oyxfvmr = 0 frys.fg_eqri = 0 pnpur = {} qrs pnpur_trg(gbc, cngu): cnegf = cngu.fcyvg('/') pnpur[('',)] = gbc p = Abar znk = yra(cnegf) #ybt('pnpur: %e\a' % pnpur.xrlf()) sbe v va enatr(znk): cer = cnegf[:znk-v] #ybt('pnpur gelvat: %e\a' % cer) p = pnpur.trg(ghcyr(cer)) vs p: erfg = cnegf[znk-v:] sbe e va erfg: #ybt('erfbyivat %e sebz %e\a' % (e, p.shyyanzr())) p = p.yerfbyir(e) xrl = ghcyr(cer + [e]) #ybt('fnivat: %e\a' % (xrl,)) pnpur[xrl] = p oernx nffreg(p) erghea p pynff OhcSf(shfr.Shfr): qrs __vavg__(frys, gbc): shfr.Shfr.__vavg__(frys) frys.gbc = gbc qrs trgngge(frys, cngu): ybt('--trgngge(%e)\a' % cngu) gel: abqr = pnpur_trg(frys.gbc, cngu) fg = Fgng() fg.fg_zbqr = abqr.zbqr fg.fg_ayvax = abqr.ayvaxf() fg.fg_fvmr = abqr.fvmr() fg.fg_zgvzr = abqr.zgvzr fg.fg_pgvzr = abqr.pgvzr fg.fg_ngvzr = abqr.ngvzr erghea fg rkprcg isf.AbFhpuSvyr: erghea -reeab.RABRAG qrs ernqqve(frys, cngu, bssfrg): ybt('--ernqqve(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) lvryq shfr.Qveragel('.') lvryq shfr.Qveragel('..') sbe fho va abqr.fhof(): lvryq shfr.Qveragel(fho.anzr) qrs ernqyvax(frys, cngu): ybt('--ernqyvax(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) erghea abqr.ernqyvax() qrs bcra(frys, cngu, syntf): ybt('--bcra(%e)\a' % cngu) abqr = pnpur_trg(frys.gbc, cngu) nppzbqr = bf.B_EQBAYL | bf.B_JEBAYL | bf.B_EQJE vs (syntf & nppzbqr) != bf.B_EQBAYL: erghea -reeab.RNPPRF abqr.bcra() qrs eryrnfr(frys, cngu, syntf): ybt('--eryrnfr(%e)\a' % cngu) qrs ernq(frys, cngu, fvmr, bssfrg): ybt('--ernq(%e)\a' % cngu) a = pnpur_trg(frys.gbc, cngu) b = a.bcra() b.frrx(bssfrg) erghea b.ernq(fvmr) vs abg unfngge(shfr, '__irefvba__'): envfr EhagvzrReebe, "lbhe shfr zbqhyr vf gbb byq sbe shfr.__irefvba__" shfr.shfr_clguba_ncv = (0, 2) bcgfcrp = """ ohc shfr [-q] [-s] -- q,qroht vapernfr qroht yriry s,sbertebhaq eha va sbertebhaq """ b = bcgvbaf.Bcgvbaf('ohc shfr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) != 1: b.sngny("rknpgyl bar nethzrag rkcrpgrq") tvg.purpx_ercb_be_qvr() gbc = isf.ErsYvfg(Abar) s = OhcSf(gbc) s.shfr_netf.zbhagcbvag = rkgen[0] vs bcg.qroht: s.shfr_netf.nqq('qroht') vs bcg.sbertebhaq: s.shfr_netf.frgzbq('sbertebhaq') cevag s.zhygvguernqrq s.zhygvguernqrq = Snyfr s.znva() #!/hfe/ova/rai clguba sebz ohc vzcbeg tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ [OHC_QVE=...] 
ohc vavg [-e ubfg:cngu] -- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc vavg', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") vs bcg.erzbgr: tvg.vavg_ercb() # ybpny ercb tvg.purpx_ercb_be_qvr() pyv = pyvrag.Pyvrag(bcg.erzbgr, perngr=Gehr) pyv.pybfr() ryfr: tvg.vavg_ercb() #!/hfe/ova/rai clguba vzcbeg flf, zngu, fgehpg, tybo sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * CNTR_FVMR=4096 FUN_CRE_CNTR=CNTR_FVMR/200. qrs zretr(vqkyvfg, ovgf, gnoyr): pbhag = 0 sbe r va tvg.vqkzretr(vqkyvfg): pbhag += 1 cersvk = tvg.rkgenpg_ovgf(r, ovgf) gnoyr[cersvk] = pbhag lvryq r qrs qb_zvqk(bhgqve, bhgsvyranzr, vasvyranzrf): vs abg bhgsvyranzr: nffreg(bhgqve) fhz = Fun1('\0'.wbva(vasvyranzrf)).urkqvtrfg() bhgsvyranzr = '%f/zvqk-%f.zvqk' % (bhgqve, fhz) vac = [] gbgny = 0 sbe anzr va vasvyranzrf: vk = tvg.CnpxVqk(anzr) vac.nccraq(vk) gbgny += yra(vk) ybt('Zretvat %q vaqrkrf (%q bowrpgf).\a' % (yra(vasvyranzrf), gbgny)) vs (abg bcg.sbepr naq (gbgny < 1024 naq yra(vasvyranzrf) < 3)) \ be (bcg.sbepr naq abg gbgny): ybt('zvqk: abguvat gb qb.\a') erghea cntrf = vag(gbgny/FUN_CRE_CNTR) be 1 ovgf = vag(zngu.prvy(zngu.ybt(cntrf, 2))) ragevrf = 2**ovgf ybt('Gnoyr fvmr: %q (%q ovgf)\a' % (ragevrf*4, ovgf)) gnoyr = [0]*ragevrf gel: bf.hayvax(bhgsvyranzr) rkprcg BFReebe: cnff s = bcra(bhgsvyranzr + '.gzc', 'j+') s.jevgr('ZVQK\0\0\0\2') s.jevgr(fgehpg.cnpx('!V', ovgf)) nffreg(s.gryy() == 12) s.jevgr('\0'*4*ragevrf) sbe r va zretr(vac, ovgf, gnoyr): s.jevgr(r) s.jevgr('\0'.wbva(bf.cngu.onfranzr(c) sbe c va vasvyranzrf)) s.frrx(12) s.jevgr(fgehpg.cnpx('!%qV' % ragevrf, *gnoyr)) s.pybfr() bf.eranzr(bhgsvyranzr + '.gzc', bhgsvyranzr) # guvf vf whfg sbe grfgvat vs 0: c = tvg.CnpxZvqk(bhgsvyranzr) nffreg(yra(c.vqkanzrf) == yra(vasvyranzrf)) cevag c.vqkanzrf nffreg(yra(c) == gbgny) cv = vgre(c) sbe v va zretr(vac, gbgny, ovgf, gnoyr): nffreg(v == cv.arkg()) nffreg(c.rkvfgf(v)) cevag bhgsvyranzr bcgfcrp = """ ohc zvqk [bcgvbaf...] 
-- b,bhgchg= bhgchg zvqk svyranzr (qrsnhyg: nhgb-trarengrq) n,nhgb nhgbzngvpnyyl perngr .zvqk sebz nal havaqrkrq .vqk svyrf s,sbepr nhgbzngvpnyyl perngr .zvqk sebz *nyy* .vqk svyrf """ b = bcgvbaf.Bcgvbaf('ohc zvqk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen naq (bcg.nhgb be bcg.sbepr): b.sngny("lbh pna'g hfr -s/-n naq nyfb cebivqr svyranzrf") tvg.purpx_ercb_be_qvr() vs rkgen: qb_zvqk(tvg.ercb('bowrpgf/cnpx'), bcg.bhgchg, rkgen) ryvs bcg.nhgb be bcg.sbepr: cnguf = [tvg.ercb('bowrpgf/cnpx')] cnguf += tybo.tybo(tvg.ercb('vaqrk-pnpur/*/.')) sbe cngu va cnguf: ybt('zvqk: fpnaavat %f\a' % cngu) vs bcg.sbepr: qb_zvqk(cngu, bcg.bhgchg, tybo.tybo('%f/*.vqk' % cngu)) ryvs bcg.nhgb: z = tvg.CnpxVqkYvfg(cngu) arrqrq = {} sbe cnpx va z.cnpxf: # bayl .vqk svyrf jvgubhg n .zvqk ner bcra vs cnpx.anzr.raqfjvgu('.vqk'): arrqrq[cnpx.anzr] = 1 qry z qb_zvqk(cngu, bcg.bhgchg, arrqrq.xrlf()) ybt('\a') ryfr: b.sngny("lbh zhfg hfr -s be -n be cebivqr vachg svyranzrf") #!/hfe/ova/rai clguba vzcbeg flf, bf, enaqbz sebz ohc vzcbeg bcgvbaf sebz ohc.urycref vzcbeg * qrs enaqoybpx(a): y = [] sbe v va kenatr(a): y.nccraq(pue(enaqbz.enaqenatr(0,256))) erghea ''.wbva(y) bcgfcrp = """ ohc qnzntr [-a pbhag] [-f znkfvmr] [-F frrq] -- JNEAVAT: GUVF PBZZNAQ VF RKGERZRYL QNATREBHF a,ahz= ahzore bs oybpxf gb qnzntr f,fvmr= znkvzhz fvmr bs rnpu qnzntrq oybpx creprag= znkvzhz fvmr bs rnpu qnzntrq oybpx (nf n creprag bs ragver svyr) rdhny fcernq qnzntr rirayl guebhtubhg gur svyr F,frrq= enaqbz ahzore frrq (sbe ercrngnoyr grfgf) """ b = bcgvbaf.Bcgvbaf('ohc qnzntr', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg rkgen: b.sngny('svyranzrf rkcrpgrq') vs bcg.frrq != Abar: enaqbz.frrq(bcg.frrq) sbe anzr va rkgen: ybt('Qnzntvat "%f"...\a' % anzr) s = bcra(anzr, 'e+o') fg = bf.sfgng(s.svyrab()) fvmr = fg.fg_fvmr vs bcg.creprag be bcg.fvmr: zf1 = vag(sybng(bcg.creprag be 0)/100.0*fvmr) be fvmr zf2 = bcg.fvmr be fvmr znkfvmr = zva(zf1, zf2) ryfr: znkfvmr = 1 puhaxf = bcg.ahz be 10 puhaxfvmr = fvmr/puhaxf sbe e va enatr(puhaxf): fm = enaqbz.enaqenatr(1, znkfvmr+1) vs fm > fvmr: fm = fvmr vs bcg.rdhny: bsf = e*puhaxfvmr ryfr: bsf = enaqbz.enaqenatr(0, fvmr - fm + 1) ybt(' %6q olgrf ng %q\a' % (fm, bsf)) s.frrx(bsf) s.jevgr(enaqoybpx(fm)) s.pybfr() #!/hfe/ova/rai clguba vzcbeg flf, fgehpg, zznc sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * fhfcraqrq_j = Abar qrs vavg_qve(pbaa, net): tvg.vavg_ercb(net) ybt('ohc freire: ohcqve vavgvnyvmrq: %e\a' % tvg.ercbqve) pbaa.bx() qrs frg_qve(pbaa, net): tvg.purpx_ercb_be_qvr(net) ybt('ohc freire: ohcqve vf %e\a' % tvg.ercbqve) pbaa.bx() qrs yvfg_vaqrkrf(pbaa, whax): tvg.purpx_ercb_be_qvr() sbe s va bf.yvfgqve(tvg.ercb('bowrpgf/cnpx')): vs s.raqfjvgu('.vqk'): pbaa.jevgr('%f\a' % s) pbaa.bx() qrs fraq_vaqrk(pbaa, anzr): tvg.purpx_ercb_be_qvr() nffreg(anzr.svaq('/') < 0) nffreg(anzr.raqfjvgu('.vqk')) vqk = tvg.CnpxVqk(tvg.ercb('bowrpgf/cnpx/%f' % anzr)) pbaa.jevgr(fgehpg.cnpx('!V', yra(vqk.znc))) pbaa.jevgr(vqk.znc) pbaa.bx() qrs erprvir_bowrpgf(pbaa, whax): tybony fhfcraqrq_j tvg.purpx_ercb_be_qvr() fhttrfgrq = {} vs fhfcraqrq_j: j = fhfcraqrq_j fhfcraqrq_j = Abar ryfr: j = tvg.CnpxJevgre() juvyr 1: af = pbaa.ernq(4) vs abg af: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq yratgu urnqre, tbg RBS\a') a = fgehpg.hacnpx('!V', af)[0] #ybt('rkcrpgvat %q olgrf\a' % a) vs abg a: ybt('ohc freire: erprvirq %q bowrpg%f.\a' % (j.pbhag, j.pbhag!=1 naq "f" be '')) shyycngu = j.pybfr() vs shyycngu: (qve, anzr) = bf.cngu.fcyvg(shyycngu) pbaa.jevgr('%f.vqk\a' % anzr) pbaa.bx() 
erghea ryvs a == 0kssssssss: ybt('ohc freire: erprvir-bowrpgf fhfcraqrq.\a') fhfcraqrq_j = j pbaa.bx() erghea ohs = pbaa.ernq(a) # bowrpg fvmrf va ohc ner ernfbanoyl fznyy #ybt('ernq %q olgrf\a' % a) vs yra(ohs) < a: j.nobeg() envfr Rkprcgvba('bowrpg ernq: rkcrpgrq %q olgrf, tbg %q\a' % (a, yra(ohs))) (glcr, pbagrag) = tvg._qrpbqr_cnpxbow(ohs) fun = tvg.pnyp_unfu(glcr, pbagrag) byqcnpx = j.rkvfgf(fun) # SVKZR: jr bayl fhttrfg n fvatyr vaqrk cre plpyr, orpnhfr gur pyvrag # vf pheeragyl qhzo gb qbjaybnq zber guna bar cre plpyr naljnl. # Npghnyyl jr fubhyq svk gur pyvrag, ohg guvf vf n zvabe bcgvzvmngvba # ba gur freire fvqr. vs abg fhttrfgrq naq \ byqcnpx naq (byqcnpx == Gehr be byqcnpx.raqfjvgu('.zvqk')): # SVKZR: jr fubhyqa'g ernyyl unir gb xabj nobhg zvqk svyrf # ng guvf ynlre. Ohg rkvfgf() ba n zvqk qbrfa'g erghea gur # cnpxanzr (fvapr vg qbrfa'g xabj)... cebonoyl jr fubhyq whfg # svk gung qrsvpvrapl bs zvqk svyrf riraghnyyl, nygubhtu vg'yy # znxr gur svyrf ovttre. Guvf zrgubq vf pregnvayl abg irel # rssvpvrag. j.bowpnpur.erserfu(fxvc_zvqk = Gehr) byqcnpx = j.bowpnpur.rkvfgf(fun) ybt('arj fhttrfgvba: %e\a' % byqcnpx) nffreg(byqcnpx) nffreg(byqcnpx != Gehr) nffreg(abg byqcnpx.raqfjvgu('.zvqk')) j.bowpnpur.erserfu(fxvc_zvqk = Snyfr) vs abg fhttrfgrq naq byqcnpx: nffreg(byqcnpx.raqfjvgu('.vqk')) (qve,anzr) = bf.cngu.fcyvg(byqcnpx) vs abg (anzr va fhttrfgrq): ybt("ohc freire: fhttrfgvat vaqrk %f\a" % anzr) pbaa.jevgr('vaqrk %f\a' % anzr) fhttrfgrq[anzr] = 1 ryfr: j._enj_jevgr([ohs]) # ABGERNPURQ qrs ernq_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() e = tvg.ernq_ers(ersanzr) pbaa.jevgr('%f\a' % (e be '').rapbqr('urk')) pbaa.bx() qrs hcqngr_ers(pbaa, ersanzr): tvg.purpx_ercb_be_qvr() arjiny = pbaa.ernqyvar().fgevc() byqiny = pbaa.ernqyvar().fgevc() tvg.hcqngr_ers(ersanzr, arjiny.qrpbqr('urk'), byqiny.qrpbqr('urk')) pbaa.bx() qrs png(pbaa, vq): tvg.purpx_ercb_be_qvr() gel: sbe oybo va tvg.png(vq): pbaa.jevgr(fgehpg.cnpx('!V', yra(oybo))) pbaa.jevgr(oybo) rkprcg XrlReebe, r: ybt('freire: reebe: %f\a' % r) pbaa.jevgr('\0\0\0\0') pbaa.reebe(r) ryfr: pbaa.jevgr('\0\0\0\0') pbaa.bx() bcgfcrp = """ ohc freire """ b = bcgvbaf.Bcgvbaf('ohc freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') ybt('ohc freire: ernqvat sebz fgqva.\a') pbzznaqf = { 'vavg-qve': vavg_qve, 'frg-qve': frg_qve, 'yvfg-vaqrkrf': yvfg_vaqrkrf, 'fraq-vaqrk': fraq_vaqrk, 'erprvir-bowrpgf': erprvir_bowrpgf, 'ernq-ers': ernq_ers, 'hcqngr-ers': hcqngr_ers, 'png': png, } # SVKZR: guvf cebgbpby vf gbgnyyl ynzr naq abg ng nyy shgher-cebbs. # (Rfcrpvnyyl fvapr jr nobeg pbzcyrgryl nf fbba nf *nalguvat* onq unccraf) pbaa = Pbaa(flf.fgqva, flf.fgqbhg) ye = yvarernqre(pbaa) sbe _yvar va ye: yvar = _yvar.fgevc() vs abg yvar: pbagvahr ybt('ohc freire: pbzznaq: %e\a' % yvar) jbeqf = yvar.fcyvg(' ', 1) pzq = jbeqf[0] erfg = yra(jbeqf)>1 naq jbeqf[1] be '' vs pzq == 'dhvg': oernx ryfr: pzq = pbzznaqf.trg(pzq) vs pzq: pzq(pbaa, erfg) ryfr: envfr Rkprcgvba('haxabja freire pbzznaq: %e\a' % yvar) ybt('ohc freire: qbar\a') #!/hfe/ova/rai clguba vzcbeg flf, gvzr, fgehpg sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, pyvrag sebz ohc.urycref vzcbeg * sebz fhocebprff vzcbeg CVCR bcgfcrp = """ ohc wbva [-e ubfg:cngu] [ersf be unfurf...] 
-- e,erzbgr= erzbgr ercbfvgbel cngu """ b = bcgvbaf.Bcgvbaf('ohc wbva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg rkgen: rkgen = yvarernqre(flf.fgqva) erg = 0 vs bcg.erzbgr: pyv = pyvrag.Pyvrag(bcg.erzbgr) png = pyv.png ryfr: pc = tvg.PngCvcr() png = pc.wbva sbe vq va rkgen: gel: sbe oybo va png(vq): flf.fgqbhg.jevgr(oybo) rkprcg XrlReebe, r: flf.fgqbhg.syhfu() ybt('reebe: %f\a' % r) erg = 1 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, er, reeab, fgng, gvzr, zngu sebz ohc vzcbeg unfufcyvg, tvg, bcgvbaf, vaqrk, pyvrag sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc fnir [-gp] [-a anzr] -- e,erzbgr= erzbgr ercbfvgbel cngu g,gerr bhgchg n gerr vq p,pbzzvg bhgchg n pbzzvg vq a,anzr= anzr bs onpxhc frg gb hcqngr (vs nal) i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) d,dhvrg qba'g fubj cebterff zrgre fznyyre= bayl onpx hc svyrf fznyyre guna a olgrf """ b = bcgvbaf.Bcgvbaf('ohc fnir', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) tvg.purpx_ercb_be_qvr() vs abg (bcg.gerr be bcg.pbzzvg be bcg.anzr): b.sngny("hfr bar be zber bs -g, -p, -a") vs abg rkgen: b.sngny("ab svyranzrf tvira") bcg.cebterff = (vfggl naq abg bcg.dhvrg) bcg.fznyyre = cnefr_ahz(bcg.fznyyre be 0) vf_erirefr = bf.raiveba.trg('OHC_FREIRE_ERIREFR') vs vf_erirefr naq bcg.erzbgr: b.sngny("qba'g hfr -e va erirefr zbqr; vg'f nhgbzngvp") ersanzr = bcg.anzr naq 'ersf/urnqf/%f' % bcg.anzr be Abar vs bcg.erzbgr be vf_erirefr: pyv = pyvrag.Pyvrag(bcg.erzbgr) byqers = ersanzr naq pyv.ernq_ers(ersanzr) be Abar j = pyv.arj_cnpxjevgre() ryfr: pyv = Abar byqers = ersanzr naq tvg.ernq_ers(ersanzr) be Abar j = tvg.CnpxJevgre() unaqyr_pgey_p() qrs rngfynfu(qve): vs qve.raqfjvgu('/'): erghea qve[:-1] ryfr: erghea qve cnegf = [''] funyvfgf = [[]] qrs _chfu(cneg): nffreg(cneg) cnegf.nccraq(cneg) funyvfgf.nccraq([]) qrs _cbc(sbepr_gerr): nffreg(yra(cnegf) >= 1) cneg = cnegf.cbc() funyvfg = funyvfgf.cbc() gerr = sbepr_gerr be j.arj_gerr(funyvfg) vs funyvfgf: funyvfgf[-1].nccraq(('40000', cneg, gerr)) ryfr: # guvf jnf gur gbcyriry, fb chg vg onpx sbe fnavgl funyvfgf.nccraq(funyvfg) erghea gerr ynfgerznva = Abar qrs cebterff_ercbeg(a): tybony pbhag, fhopbhag, ynfgerznva fhopbhag += a pp = pbhag + fhopbhag cpg = gbgny naq (pp*100.0/gbgny) be 0 abj = gvzr.gvzr() ryncfrq = abj - gfgneg xcf = ryncfrq naq vag(pp/1024./ryncfrq) xcf_senp = 10 ** vag(zngu.ybt(xcf+1, 10) - 1) xcf = vag(xcf/xcf_senp)*xcf_senp vs pp: erznva = ryncfrq*1.0/pp * (gbgny-pp) ryfr: erznva = 0.0 vs (ynfgerznva naq (erznva > ynfgerznva) naq ((erznva - ynfgerznva)/ynfgerznva < 0.05)): erznva = ynfgerznva ryfr: ynfgerznva = erznva ubhef = vag(erznva/60/60) zvaf = vag(erznva/60 - ubhef*60) frpf = vag(erznva - ubhef*60*60 - zvaf*60) vs ryncfrq < 30: erznvafge = '' xcffge = '' ryfr: xcffge = '%qx/f' % xcf vs ubhef: erznvafge = '%qu%qz' % (ubhef, zvaf) ryvs zvaf: erznvafge = '%qz%q' % (zvaf, frpf) ryfr: erznvafge = '%qf' % frpf cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf) %f %f\e' % (cpg, pp/1024, gbgny/1024, spbhag, sgbgny, erznvafge, xcffge)) e = vaqrk.Ernqre(tvg.ercb('ohcvaqrk')) qrs nyernql_fnirq(rag): erghea rag.vf_inyvq() naq j.rkvfgf(rag.fun) naq rag.fun qrs jnagerphefr_cer(rag): erghea abg nyernql_fnirq(rag) qrs jnagerphefr_qhevat(rag): erghea abg nyernql_fnirq(rag) be rag.fun_zvffvat() gbgny = sgbgny = 0 vs bcg.cebterff: sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_cer): vs abg (sgbgny % 10024): cebterff('Ernqvat vaqrk: %q\e' % sgbgny) rkvfgf = rag.rkvfgf() unfuinyvq = nyernql_fnirq(rag) 
rag.frg_fun_zvffvat(abg unfuinyvq) vs abg bcg.fznyyre be rag.fvmr < bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: gbgny += rag.fvmr sgbgny += 1 cebterff('Ernqvat vaqrk: %q, qbar.\a' % sgbgny) unfufcyvg.cebterff_pnyyonpx = cebterff_ercbeg gfgneg = gvzr.gvzr() pbhag = fhopbhag = spbhag = 0 ynfgfxvc_anzr = Abar ynfgqve = '' sbe (genafanzr,rag) va e.svygre(rkgen, jnagerphefr=jnagerphefr_qhevat): (qve, svyr) = bf.cngu.fcyvg(rag.anzr) rkvfgf = (rag.syntf & vaqrk.VK_RKVFGF) unfuinyvq = nyernql_fnirq(rag) jnfzvffvat = rag.fun_zvffvat() byqfvmr = rag.fvmr vs bcg.ireobfr: vs abg rkvfgf: fgnghf = 'Q' ryvs abg unfuinyvq: vs rag.fun == vaqrk.RZCGL_FUN: fgnghf = 'N' ryfr: fgnghf = 'Z' ryfr: fgnghf = ' ' vs bcg.ireobfr >= 2: ybt('%f %-70f\a' % (fgnghf, rag.anzr)) ryvs abg fgng.F_VFQVE(rag.zbqr) naq ynfgqve != qve: vs abg ynfgqve.fgnegfjvgu(qve): ybt('%f %-70f\a' % (fgnghf, bf.cngu.wbva(qve, ''))) ynfgqve = qve vs bcg.cebterff: cebterff_ercbeg(0) spbhag += 1 vs abg rkvfgf: pbagvahr vs bcg.fznyyre naq rag.fvmr >= bcg.fznyyre: vs rkvfgf naq abg unfuinyvq: nqq_reebe('fxvccvat ynetr svyr "%f"' % rag.anzr) ynfgfxvc_anzr = rag.anzr pbagvahr nffreg(qve.fgnegfjvgu('/')) qvec = qve.fcyvg('/') juvyr cnegf > qvec: _cbc(sbepr_gerr = Abar) vs qve != '/': sbe cneg va qvec[yra(cnegf):]: _chfu(cneg) vs abg svyr: # ab svyranzr cbegvba zrnaf guvf vf n fhoqve. Ohg # fho/cneragqverpgbevrf nyernql unaqyrq va gur cbc/chfu() cneg nobir. byqgerr = nyernql_fnirq(rag) # znl or Abar arjgerr = _cbc(sbepr_gerr = byqgerr) vs abg byqgerr: vs ynfgfxvc_anzr naq ynfgfxvc_anzr.fgnegfjvgu(rag.anzr): rag.vainyvqngr() ryfr: rag.inyvqngr(040000, arjgerr) rag.ercnpx() vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr pbagvahr # vg'f abg n qverpgbel vq = Abar vs unfuinyvq: zbqr = '%b' % rag.tvgzbqr vq = rag.fun funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) ryfr: vs fgng.F_VFERT(rag.zbqr): gel: s = unfufcyvg.bcra_abngvzr(rag.anzr) rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = unfufcyvg.fcyvg_gb_oybo_be_gerr(j, [s]) ryfr: vs fgng.F_VFQVE(rag.zbqr): nffreg(0) # unaqyrq nobir ryvs fgng.F_VFYAX(rag.zbqr): gel: ey = bf.ernqyvax(rag.anzr) rkprcg BFReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr rkprcg VBReebe, r: nqq_reebe(r) ynfgfxvc_anzr = rag.anzr ryfr: (zbqr, vq) = ('120000', j.arj_oybo(ey)) ryfr: nqq_reebe(Rkprcgvba('fxvccvat fcrpvny svyr "%f"' % rag.anzr)) ynfgfxvc_anzr = rag.anzr vs vq: rag.inyvqngr(vag(zbqr, 8), vq) rag.ercnpx() funyvfgf[-1].nccraq((zbqr, tvg.znatyr_anzr(svyr, rag.zbqr, rag.tvgzbqr), vq)) vs rkvfgf naq jnfzvffvat: pbhag += byqfvmr fhopbhag = 0 vs bcg.cebterff: cpg = gbgny naq pbhag*100.0/gbgny be 100 cebterff('Fnivat: %.2s%% (%q/%qx, %q/%q svyrf), qbar. 
\a' % (cpg, pbhag/1024, gbgny/1024, spbhag, sgbgny)) juvyr yra(cnegf) > 1: _cbc(sbepr_gerr = Abar) nffreg(yra(funyvfgf) == 1) gerr = j.arj_gerr(funyvfgf[-1]) vs bcg.gerr: cevag gerr.rapbqr('urk') vs bcg.pbzzvg be bcg.anzr: zft = 'ohc fnir\a\aTrarengrq ol pbzznaq:\a%e' % flf.neti ers = bcg.anzr naq ('ersf/urnqf/%f' % bcg.anzr) be Abar pbzzvg = j.arj_pbzzvg(byqers, gerr, zft) vs bcg.pbzzvg: cevag pbzzvg.rapbqr('urk') j.pybfr() # zhfg pybfr orsber jr pna hcqngr gur ers vs bcg.anzr: vs pyv: pyv.hcqngr_ers(ersanzr, pbzzvg, byqers) ryfr: tvg.hcqngr_ers(ersanzr, pbzzvg, byqers) vs pyv: pyv.pybfr() vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq juvyr fnivat.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, gvzr sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc gvpx """ b = bcgvbaf.Bcgvbaf('ohc gvpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") g = gvzr.gvzr() gyrsg = 1 - (g - vag(g)) gvzr.fyrrc(gyrsg) #!/hfe/ova/rai clguba vzcbeg bf, flf, fgng, gvzr sebz ohc vzcbeg bcgvbaf, tvg, vaqrk, qerphefr sebz ohc.urycref vzcbeg * qrs zretr_vaqrkrf(bhg, e1, e2): sbe r va vaqrk.ZretrVgre([e1, e2]): # SVKZR: fubhyqa'g jr erzbir qryrgrq ragevrf riraghnyyl? Jura? bhg.nqq_vkragel(r) pynff VgreUrycre: qrs __vavg__(frys, y): frys.v = vgre(y) frys.phe = Abar frys.arkg() qrs arkg(frys): gel: frys.phe = frys.v.arkg() rkprcg FgbcVgrengvba: frys.phe = Abar erghea frys.phe qrs purpx_vaqrk(ernqre): gel: ybt('purpx: purpxvat sbejneq vgrengvba...\a') r = Abar q = {} sbe r va ernqre.sbejneq_vgre(): vs r.puvyqera_a: vs bcg.ireobfr: ybt('%08k+%-4q %e\a' % (r.puvyqera_bsf, r.puvyqera_a, r.anzr)) nffreg(r.puvyqera_bsf) nffreg(r.anzr.raqfjvgu('/')) nffreg(abg q.trg(r.puvyqera_bsf)) q[r.puvyqera_bsf] = 1 vs r.syntf & vaqrk.VK_UNFUINYVQ: nffreg(r.fun != vaqrk.RZCGL_FUN) nffreg(r.tvgzbqr) nffreg(abg r be r.anzr == '/') # ynfg ragel vf *nyjnlf* / ybt('purpx: purpxvat abezny vgrengvba...\a') ynfg = Abar sbe r va ernqre: vs ynfg: nffreg(ynfg > r.anzr) ynfg = r.anzr rkprcg: ybt('vaqrk reebe! ng %e\a' % r) envfr ybt('purpx: cnffrq.\a') qrs hcqngr_vaqrk(gbc): ev = vaqrk.Ernqre(vaqrksvyr) jv = vaqrk.Jevgre(vaqrksvyr) evt = VgreUrycre(ev.vgre(anzr=gbc)) gfgneg = vag(gvzr.gvzr()) unfutra = Abar vs bcg.snxr_inyvq: qrs unfutra(anzr): erghea (0100644, vaqrk.SNXR_FUN) gbgny = 0 sbe (cngu,cfg) va qerphefr.erphefvir_qveyvfg([gbc], kqri=bcg.kqri): vs bcg.ireobfr>=2 be (bcg.ireobfr==1 naq fgng.F_VFQVE(cfg.fg_zbqr)): flf.fgqbhg.jevgr('%f\a' % cngu) flf.fgqbhg.syhfu() cebterff('Vaqrkvat: %q\e' % gbgny) ryvs abg (gbgny % 128): cebterff('Vaqrkvat: %q\e' % gbgny) gbgny += 1 juvyr evt.phe naq evt.phe.anzr > cngu: # qryrgrq cnguf vs evt.phe.rkvfgf(): evt.phe.frg_qryrgrq() evt.phe.ercnpx() evt.arkg() vs evt.phe naq evt.phe.anzr == cngu: # cnguf gung nyernql rkvfgrq vs cfg: evt.phe.sebz_fgng(cfg, gfgneg) vs abg (evt.phe.syntf & vaqrk.VK_UNFUINYVQ): vs unfutra: (evt.phe.tvgzbqr, evt.phe.fun) = unfutra(cngu) evt.phe.syntf |= vaqrk.VK_UNFUINYVQ vs bcg.snxr_vainyvq: evt.phe.vainyvqngr() evt.phe.ercnpx() evt.arkg() ryfr: # arj cnguf jv.nqq(cngu, cfg, unfutra = unfutra) cebterff('Vaqrkvat: %q, qbar.\a' % gbgny) vs ev.rkvfgf(): ev.fnir() jv.syhfu() vs jv.pbhag: je = jv.arj_ernqre() vs bcg.purpx: ybt('purpx: orsber zretvat: byqsvyr\a') purpx_vaqrk(ev) ybt('purpx: orsber zretvat: arjsvyr\a') purpx_vaqrk(je) zv = vaqrk.Jevgre(vaqrksvyr) zretr_vaqrkrf(zv, ev, je) ev.pybfr() zv.pybfr() je.pybfr() jv.nobeg() ryfr: jv.pybfr() bcgfcrp = """ ohc vaqrk <-c|z|h> [bcgvbaf...] 
-- c,cevag cevag gur vaqrk ragevrf sbe gur tvira anzrf (nyfb jbexf jvgu -h) z,zbqvsvrq cevag bayl nqqrq/qryrgrq/zbqvsvrq svyrf (vzcyvrf -c) f,fgnghf cevag rnpu svyranzr jvgu n fgnghf pune (N/Z/Q) (vzcyvrf -c) U,unfu cevag gur unfu sbe rnpu bowrpg arkg gb vgf anzr (vzcyvrf -c) y,ybat cevag zber vasbezngvba nobhg rnpu svyr h,hcqngr (erphefviryl) hcqngr gur vaqrk ragevrf sbe gur tvira svyranzrf k,kqri,bar-svyr-flfgrz qba'g pebff svyrflfgrz obhaqnevrf snxr-inyvq znex nyy vaqrk ragevrf nf hc-gb-qngr rira vs gurl nera'g snxr-vainyvq znex nyy vaqrk ragevrf nf vainyvq purpx pnershyyl purpx vaqrk svyr vagrtevgl s,vaqrksvyr= gur anzr bs gur vaqrk svyr (qrsnhyg 'vaqrk') i,ireobfr vapernfr ybt bhgchg (pna or hfrq zber guna bapr) """ b = bcgvbaf.Bcgvbaf('ohc vaqrk', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs abg (bcg.zbqvsvrq be bcg['cevag'] be bcg.fgnghf be bcg.hcqngr be bcg.purpx): b.sngny('fhccyl bar be zber bs -c, -f, -z, -h, be --purpx') vs (bcg.snxr_inyvq be bcg.snxr_vainyvq) naq abg bcg.hcqngr: b.sngny('--snxr-{va,}inyvq ner zrnavatyrff jvgubhg -h') vs bcg.snxr_inyvq naq bcg.snxr_vainyvq: b.sngny('--snxr-inyvq vf vapbzcngvoyr jvgu --snxr-vainyvq') tvg.purpx_ercb_be_qvr() vaqrksvyr = bcg.vaqrksvyr be tvg.ercb('ohcvaqrk') unaqyr_pgey_p() vs bcg.purpx: ybt('purpx: fgnegvat vavgvny purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) cnguf = vaqrk.erqhpr_cnguf(rkgen) vs bcg.hcqngr: vs abg cnguf: b.sngny('hcqngr (-h) erdhrfgrq ohg ab cnguf tvira') sbe (ec,cngu) va cnguf: hcqngr_vaqrk(ec) vs bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq: sbe (anzr, rag) va vaqrk.Ernqre(vaqrksvyr).svygre(rkgen be ['']): vs (bcg.zbqvsvrq naq (rag.vf_inyvq() be rag.vf_qryrgrq() be abg rag.zbqr)): pbagvahr yvar = '' vs bcg.fgnghf: vs rag.vf_qryrgrq(): yvar += 'Q ' ryvs abg rag.vf_inyvq(): vs rag.fun == vaqrk.RZCGL_FUN: yvar += 'N ' ryfr: yvar += 'Z ' ryfr: yvar += ' ' vs bcg.unfu: yvar += rag.fun.rapbqr('urk') + ' ' vs bcg.ybat: yvar += "%7f %7f " % (bpg(rag.zbqr), bpg(rag.tvgzbqr)) cevag yvar + (anzr be './') vs bcg.purpx naq (bcg['cevag'] be bcg.fgnghf be bcg.zbqvsvrq be bcg.hcqngr): ybt('purpx: fgnegvat svany purpx.\a') purpx_vaqrk(vaqrk.Ernqre(vaqrksvyr)) vs fnirq_reebef: ybt('JNEAVAT: %q reebef rapbhagrerq.\a' % yra(fnirq_reebef)) flf.rkvg(1) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg sebz ohc vzcbeg bcgvbaf, urycref bcgfcrp = """ ohc eonpxhc-freire -- Guvf pbzznaq vf abg vagraqrq gb or eha znahnyyl. """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc-freire', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny('ab nethzragf rkcrpgrq') # trg gur fhopbzznaq'f neti. # Abeznyyl jr pbhyq whfg cnff guvf ba gur pbzznaq yvar, ohg fvapr jr'yy bsgra # or trggvat pnyyrq ba gur bgure raq bs na ffu cvcr, juvpu graqf gb znatyr # neti (ol fraqvat vg ivn gur furyy), guvf jnl vf zhpu fnsre. ohs = flf.fgqva.ernq(4) fm = fgehpg.hacnpx('!V', ohs)[0] nffreg(fm > 0) nffreg(fm < 1000000) ohs = flf.fgqva.ernq(fm) nffreg(yra(ohs) == fm) neti = ohs.fcyvg('\0') # fgqva/fgqbhg ner fhccbfrqyl pbaarpgrq gb 'ohc freire' gung gur pnyyre # fgnegrq sbe hf (bsgra ba gur bgure raq bs na ffu ghaary), fb jr qba'g jnag # gb zvfhfr gurz. Zbir gurz bhg bs gur jnl, gura ercynpr fgqbhg jvgu # n cbvagre gb fgqree va pnfr bhe fhopbzznaq jnagf gb qb fbzrguvat jvgu vg. # # Vg zvtug or avpr gb qb gur fnzr jvgu fgqva, ohg zl rkcrevzragf fubjrq gung # ffu frrzf gb znxr vgf puvyq'f fgqree n ernqnoyr-ohg-arire-ernqf-nalguvat # fbpxrg. Gurl ernyyl fubhyq unir hfrq fuhgqbja(FUHG_JE) ba gur bgure raq # bs vg, ohg cebonoyl qvqa'g. 
Naljnl, vg'f gbb zrffl, fb yrg'f whfg znxr fher # nalbar ernqvat sebz fgqva vf qvfnccbvagrq. # # (Lbh pna'g whfg yrnir fgqva/fgqbhg "abg bcra" ol pybfvat gur svyr # qrfpevcgbef. Gura gur arkg svyr gung bcraf vf nhgbzngvpnyyl nffvtarq 0 be 1, # naq crbcyr *gelvat* gb ernq/jevgr fgqva/fgqbhg trg fperjrq.) bf.qhc2(0, 3) bf.qhc2(1, 4) bf.qhc2(2, 1) sq = bf.bcra('/qri/ahyy', bf.B_EQBAYL) bf.qhc2(sq, 0) bf.pybfr(sq) bf.raiveba['OHC_FREIRE_ERIREFR'] = urycref.ubfganzr() bf.rkrpic(neti[0], neti) flf.rkvg(99) #!/hfe/ova/rai clguba vzcbeg flf, bf, tybo, fhocebprff, gvzr sebz ohc vzcbeg bcgvbaf, tvg sebz ohc.urycref vzcbeg * cne2_bx = 0 ahyys = bcra('/qri/ahyy') qrs qroht(f): vs bcg.ireobfr: ybt(f) qrs eha(neti): # ng yrnfg va clguba 2.5, hfvat "fgqbhg=2" be "fgqbhg=flf.fgqree" orybj # qbrfa'g npghnyyl jbex, orpnhfr fhocebprff pybfrf sq #2 evtug orsber # rkrpvat sbe fbzr ernfba. Fb jr jbex nebhaq vg ol qhcyvpngvat gur sq # svefg. sq = bf.qhc(2) # pbcl fgqree gel: c = fhocebprff.Cbcra(neti, fgqbhg=sq, pybfr_sqf=Snyfr) erghea c.jnvg() svanyyl: bf.pybfr(sq) qrs cne2_frghc(): tybony cne2_bx ei = 1 gel: c = fhocebprff.Cbcra(['cne2', '--uryc'], fgqbhg=ahyys, fgqree=ahyys, fgqva=ahyys) ei = c.jnvg() rkprcg BFReebe: ybt('sfpx: jneavat: cne2 abg sbhaq; qvfnoyvat erpbirel srngherf.\a') ryfr: cne2_bx = 1 qrs cnei(yiy): vs bcg.ireobfr >= yiy: vs vfggl: erghea [] ryfr: erghea ['-d'] ryfr: erghea ['-dd'] qrs cne2_trarengr(onfr): erghea eha(['cne2', 'perngr', '-a1', '-p200'] + cnei(2) + ['--', onfr, onfr+'.cnpx', onfr+'.vqk']) qrs cne2_irevsl(onfr): erghea eha(['cne2', 'irevsl'] + cnei(3) + ['--', onfr]) qrs cne2_ercnve(onfr): erghea eha(['cne2', 'ercnve'] + cnei(2) + ['--', onfr]) qrs dhvpx_irevsl(onfr): s = bcra(onfr + '.cnpx', 'eo') s.frrx(-20, 2) jnagfhz = s.ernq(20) nffreg(yra(jnagfhz) == 20) s.frrx(0) fhz = Fun1() sbe o va puhaxlernqre(s, bf.sfgng(s.svyrab()).fg_fvmr - 20): fhz.hcqngr(o) vs fhz.qvtrfg() != jnagfhz: envfr InyhrReebe('rkcrpgrq %e, tbg %e' % (jnagfhz.rapbqr('urk'), fhz.urkqvtrfg())) qrs tvg_irevsl(onfr): vs bcg.dhvpx: gel: dhvpx_irevsl(onfr) rkprcg Rkprcgvba, r: qroht('reebe: %f\a' % r) erghea 1 erghea 0 ryfr: erghea eha(['tvg', 'irevsl-cnpx', '--', onfr]) qrs qb_cnpx(onfr, ynfg): pbqr = 0 vs cne2_bx naq cne2_rkvfgf naq (bcg.ercnve be abg bcg.trarengr): ierfhyg = cne2_irevsl(onfr) vs ierfhyg != 0: vs bcg.ercnve: eerfhyg = cne2_ercnve(onfr) vs eerfhyg != 0: cevag '%f cne2 ercnve: snvyrq (%q)' % (ynfg, eerfhyg) pbqr = eerfhyg ryfr: cevag '%f cne2 ercnve: fhpprrqrq (0)' % ynfg pbqr = 100 ryfr: cevag '%f cne2 irevsl: snvyrq (%q)' % (ynfg, ierfhyg) pbqr = ierfhyg ryfr: cevag '%f bx' % ynfg ryvs abg bcg.trarengr be (cne2_bx naq abg cne2_rkvfgf): terfhyg = tvg_irevsl(onfr) vs terfhyg != 0: cevag '%f tvg irevsl: snvyrq (%q)' % (ynfg, terfhyg) pbqr = terfhyg ryfr: vs cne2_bx naq bcg.trarengr: cerfhyg = cne2_trarengr(onfr) vs cerfhyg != 0: cevag '%f cne2 perngr: snvyrq (%q)' % (ynfg, cerfhyg) pbqr = cerfhyg ryfr: cevag '%f bx' % ynfg ryfr: cevag '%f bx' % ynfg ryfr: nffreg(bcg.trarengr naq (abg cne2_bx be cne2_rkvfgf)) qroht(' fxvccrq: cne2 svyr nyernql trarengrq.\a') erghea pbqr bcgfcrp = """ ohc sfpx [bcgvbaf...] [svyranzrf...] -- e,ercnve nggrzcg gb ercnve reebef hfvat cne2 (qnatrebhf!) 
t,trarengr trarengr nhgb-ercnve vasbezngvba hfvat cne2 i,ireobfr vapernfr ireobfvgl (pna or hfrq zber guna bapr) dhvpx whfg purpx cnpx fun1fhz, qba'g hfr tvg irevsl-cnpx w,wbof= eha 'a' wbof va cnenyyry cne2-bx vzzrqvngryl erghea 0 vs cne2 vf bx, 1 vs abg qvfnoyr-cne2 vtaber cne2 rira vs vg vf ninvynoyr """ b = bcgvbaf.Bcgvbaf('ohc sfpx', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) cne2_frghc() vs bcg.cne2_bx: vs cne2_bx: flf.rkvg(0) # 'gehr' va fu ryfr: flf.rkvg(1) vs bcg.qvfnoyr_cne2: cne2_bx = 0 tvg.purpx_ercb_be_qvr() vs abg rkgen: qroht('sfpx: Ab svyranzrf tvira: purpxvat nyy cnpxf.\a') rkgen = tybo.tybo(tvg.ercb('bowrpgf/cnpx/*.cnpx')) pbqr = 0 pbhag = 0 bhgfgnaqvat = {} sbe anzr va rkgen: vs anzr.raqfjvgu('.cnpx'): onfr = anzr[:-5] ryvs anzr.raqfjvgu('.vqk'): onfr = anzr[:-4] ryvs anzr.raqfjvgu('.cne2'): onfr = anzr[:-5] ryvs bf.cngu.rkvfgf(anzr + '.cnpx'): onfr = anzr ryfr: envfr Rkprcgvba('%f vf abg n cnpx svyr!' % anzr) (qve,ynfg) = bf.cngu.fcyvg(onfr) cne2_rkvfgf = bf.cngu.rkvfgf(onfr + '.cne2') vs cne2_rkvfgf naq bf.fgng(onfr + '.cne2').fg_fvmr == 0: cne2_rkvfgf = 0 flf.fgqbhg.syhfu() qroht('sfpx: purpxvat %f (%f)\a' % (ynfg, cne2_bx naq cne2_rkvfgf naq 'cne2' be 'tvg')) vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.wbof: ap = qb_cnpx(onfr, ynfg) pbqr = pbqr be ap pbhag += 1 ryfr: juvyr yra(bhgfgnaqvat) >= bcg.wbof: (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 cvq = bf.sbex() vs cvq: # cnerag bhgfgnaqvat[cvq] = 1 ryfr: # puvyq gel: flf.rkvg(qb_cnpx(onfr, ynfg)) rkprcg Rkprcgvba, r: ybt('rkprcgvba: %e\a' % r) flf.rkvg(99) juvyr yra(bhgfgnaqvat): (cvq,ap) = bf.jnvg() ap >>= 8 vs cvq va bhgfgnaqvat: qry bhgfgnaqvat[cvq] pbqr = pbqr be ap pbhag += 1 vs abg bcg.ireobfr: cebterff('sfpx (%q/%q)\e' % (pbhag, yra(rkgen))) vs abg bcg.ireobfr naq vfggl: ybt('sfpx qbar. \a') flf.rkvg(pbqr) #!/hfe/ova/rai clguba vzcbeg flf, bf, fgehpg, trgbcg, fhocebprff, fvtany sebz ohc vzcbeg bcgvbaf, ffu sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc eonpxhc vaqrk ... ohc eonpxhc fnir ... ohc eonpxhc fcyvg ... """ b = bcgvbaf.Bcgvbaf('ohc eonpxhc', bcgfcrp, bcgshap=trgbcg.trgbcg) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs yra(rkgen) < 2: b.sngny('nethzragf rkcrpgrq') pynff FvtRkprcgvba(Rkprcgvba): qrs __vavg__(frys, fvtahz): frys.fvtahz = fvtahz Rkprcgvba.__vavg__(frys, 'fvtany %q erprvirq' % fvtahz) qrs unaqyre(fvtahz, senzr): envfr FvtRkprcgvba(fvtahz) fvtany.fvtany(fvtany.FVTGREZ, unaqyre) fvtany.fvtany(fvtany.FVTVAG, unaqyre) fc = Abar c = Abar erg = 99 gel: ubfganzr = rkgen[0] neti = rkgen[1:] c = ffu.pbaarpg(ubfganzr, 'eonpxhc-freire') netif = '\0'.wbva(['ohc'] + neti) c.fgqva.jevgr(fgehpg.cnpx('!V', yra(netif)) + netif) c.fgqva.syhfu() znva_rkr = bf.raiveba.trg('OHC_ZNVA_RKR') be flf.neti[0] fc = fhocebprff.Cbcra([znva_rkr, 'freire'], fgqva=c.fgqbhg, fgqbhg=c.fgqva) c.fgqva.pybfr() c.fgqbhg.pybfr() svanyyl: juvyr 1: # vs jr trg n fvtany juvyr jnvgvat, jr unir gb xrrc jnvgvat, whfg # va pnfr bhe puvyq qbrfa'g qvr. 
gel: erg = c.jnvg() fc.jnvg() oernx rkprcg FvtRkprcgvba, r: ybt('\aohc eonpxhc: %f\a' % r) bf.xvyy(c.cvq, r.fvtahz) erg = 84 flf.rkvg(erg) #!/hfe/ova/rai clguba vzcbeg flf, bf, er sebz ohc vzcbeg bcgvbaf bcgfcrp = """ ohc arjyvare """ b = bcgvbaf.Bcgvbaf('ohc arjyvare', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") e = er.pbzcvyr(e'([\e\a])') ynfgyra = 0 nyy = '' juvyr 1: y = e.fcyvg(nyy, 1) vs yra(y) <= 1: gel: o = bf.ernq(flf.fgqva.svyrab(), 4096) rkprcg XrlobneqVagreehcg: oernx vs abg o: oernx nyy += o ryfr: nffreg(yra(y) == 3) (yvar, fcyvgpune, nyy) = y #fcyvgpune = '\a' flf.fgqbhg.jevgr('%-*f%f' % (ynfgyra, yvar, fcyvgpune)) vs fcyvgpune == '\e': ynfgyra = yra(yvar) ryfr: ynfgyra = 0 flf.fgqbhg.syhfu() vs ynfgyra be nyy: flf.fgqbhg.jevgr('%-*f\a' % (ynfgyra, nyy)) #!/hfe/ova/rai clguba vzcbeg flf sebz ohc vzcbeg bcgvbaf, tvg, _unfufcyvg sebz ohc.urycref vzcbeg * bcgfcrp = """ ohc znetva """ b = bcgvbaf.Bcgvbaf('ohc znetva', bcgfcrp) (bcg, syntf, rkgen) = b.cnefr(flf.neti[1:]) vs rkgen: b.sngny("ab nethzragf rkcrpgrq") tvg.purpx_ercb_be_qvr() #tvg.vtaber_zvqk = 1 zv = tvg.CnpxVqkYvfg(tvg.ercb('bowrpgf/cnpx')) ynfg = '\0'*20 ybatzngpu = 0 sbe v va zv: vs v == ynfg: pbagvahr #nffreg(fge(v) >= ynfg) cz = _unfufcyvg.ovgzngpu(ynfg, v) ybatzngpu = znk(ybatzngpu, cz) ynfg = v cevag ybatzngpu bup-0.29/t/unknown-owner000077500000000000000000000010741303127641400152550ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/../cmd/bup-python" || exit $? exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble import grp import pwd import sys def usage(): print >> sys.stderr, "Usage: unknown-owner (--user | --group)" if len(sys.argv) != 2: usage() sys.exit(1) if sys.argv[1] == '--user': max_name_len = max([len(x.pw_name) for x in pwd.getpwall()]) elif sys.argv[1] == '--group': max_name_len = max([len(x.gr_name) for x in grp.getgrall()]) else: usage() sys.exit(1) print 'x' * (max_name_len + 1) bup-0.29/wvtest000077500000000000000000000176751303127641400135350ustar00rootroot00000000000000#!/usr/bin/env perl # # WvTest: # Copyright (C) 2007-2009 Versabanq Innovations Inc. and contributors. # Copyright (C) 2015 Rob Browning # Licensed under the GNU Library General Public License, version 2. # See the included file named LICENSE for license information. # use strict; use warnings; use Getopt::Long qw(GetOptionsFromArray :config no_ignore_case bundling); use Pod::Usage; use Time::HiRes qw(time); my $pid; my $istty = -t STDOUT; my @log = (); sub bigkill($) { my $pid = shift; if (@log) { print "\n" . join("\n", @log) . "\n"; } print STDERR "\n! Killed by signal FAILED\n"; ($pid > 0) || die("pid is '$pid'?!\n"); local $SIG{CHLD} = sub { }; # this will wake us from sleep() faster kill 15, $pid; sleep(2); if ($pid > 1) { kill 9, -$pid; } kill 9, $pid; exit(125); } sub colourize($) { my $result = shift; my $pass = ($result eq "ok"); if ($istty) { my $colour = $pass ? "\e[32;1m" : "\e[31;1m"; return "$colour$result\e[0m"; } else { return $result; } } sub mstime($$$) { my ($floatsec, $warntime, $badtime) = @_; my $ms = int($floatsec * 1000); my $str = sprintf("%d.%03ds", $ms/1000, $ms % 1000); if ($istty && $ms > $badtime) { return "\e[31;1m$str\e[0m"; } elsif ($istty && $ms > $warntime) { return "\e[33;1m$str\e[0m"; } else { return "$str"; } } sub resultline($$) { my ($name, $result) = @_; return sprintf("! 
%-65s %s", $name, colourize($result)); } my ($start, $stop); sub endsect() { $stop = time(); if ($start) { printf " %s %s\n", mstime($stop - $start, 500, 1000), colourize("ok"); } } sub run { # dup_msgs should be true when "watching". In that case all top # level wvtest protocol messages should be duplicated to stderr so # that they can be safely captured for report to process later. my ($dup_msgs) = @_; my $show_counts = 1; GetOptionsFromArray(\@ARGV, 'counts!', \$show_counts) or pod2usage(); pod2usage('$0: no command specified') if (@ARGV < 1); # always flush $| = 1; { my $msg = "Testing \"all\" in @ARGV:\n"; print $msg; print STDERR $msg if $dup_msgs; } $pid = open(my $fh, "-|"); if (!$pid) { # child setpgrp(); open STDERR, '>&STDOUT' or die("Can't dup stdout: $!\n"); exec(@ARGV); exit 126; # just in case } # parent my $allstart = time(); local $SIG{INT} = sub { bigkill($pid); }; local $SIG{TERM} = sub { bigkill($pid); }; local $SIG{ALRM} = sub { print STDERR resultline('Alarm timed out! No test results for too long.\n', 'FAILED'); bigkill($pid); }; my ($gpasses, $gfails) = (0,0); while (<$fh>) { chomp; s/\r//g; if (/^\s*Testing "(.*)" in (.*):\s*$/) { alarm(300); my ($sect, $file) = ($1, $2); endsect(); printf("! %s %s: ", $file, $sect); @log = (); $start = $stop; } elsif (/^!\s*(.*?)\s+(\S+)\s*$/) { alarm(300); my ($name, $result) = ($1, $2); my $pass = ($result eq "ok"); if (!$start) { printf("\n! Startup: "); $start = time(); } push @log, resultline($name, $result); if (!$pass) { $gfails++; if (@log) { print "\n" . join("\n", @log) . "\n"; @log = (); } } else { $gpasses++; print "."; } } else { push @log, $_; } } endsect(); my $newpid = waitpid($pid, 0); if ($newpid != $pid) { die("waitpid returned '$newpid', expected '$pid'\n"); } my $code = $?; my $ret = ($code >> 8); # return death-from-signal exits as >128. This is what bash does if you ran # the program directly. if ($code && !$ret) { $ret = $code | 128; } if ($ret && @log) { print "\n" . join("\n", @log) . "\n"; } if ($code != 0) { my $msg = resultline("Program returned non-zero exit code ($ret)", 'FAILED'); print $msg; print STDERR "$msg\n" if $dup_msgs; } print "\n"; if ($show_counts) { my $gtotal = $gpasses + $gfails; my $msg = sprintf("WvTest: %d test%s, %d failure%s\n", $gtotal, $gtotal == 1 ? "" : "s", $gfails, $gfails == 1 ? "" : "s"); print $msg; print STDERR $msg if $dup_msgs; } { my $msg = sprintf("WvTest: result code $ret, total time %s\n", mstime(time() - $allstart, 2000, 5000)); print $msg; print STDERR $msg if $dup_msgs; } return ($ret ? $ret : ($gfails ? 125 : 0)); } sub report() { my ($gpasses, $gfails) = (0,0); for my $f (@ARGV) { my $fh; open($fh, '<:crlf', $f) or die "Unable to open $f: $!"; while (<$fh>) { chomp; s/\r//g; if (/^\s*Testing "(.*)" in (.*):\s*$/) { @log = (); } elsif (/^!\s*(.*?)\s+(\S+)\s*$/) { my ($name, $result) = ($1, $2); my $pass = ($result eq "ok"); push @log, resultline($name, $result); if (!$pass) { $gfails++; if (@log) { print "\n" . join("\n", @log) . "\n"; @log = (); } } else { $gpasses++; } } else { push @log, $_; } } } my $gtotal = $gpasses + $gfails; printf("\nWvTest: %d test%s, %d failure%s\n", $gtotal, $gtotal == 1 ? "" : "s", $gfails, $gfails == 1 ? "" : "s"); return ($gfails ? 125 : 0); } my ($show_help, $show_manual); Getopt::Long::Configure('no_permute'); GetOptionsFromArray(\@ARGV, 'help|?' 
=> \$show_help, 'man' => \$show_manual) or pod2usage(); Getopt::Long::Configure('permute'); pod2usage(-verbose => 1, -exitval => 0) if $show_help; pod2usage(-verbose => 2, -exitval => 0) if $show_manual; pod2usage(-msg => "$0: no action specified", -verbose => 1) if (@ARGV < 1); my $action = $ARGV[0]; shift @ARGV; if ($action eq 'run') { exit run(0); } elsif ($action eq 'watch') { run(1); } elsif ($action eq 'report') { exit report(); } else { pod2usage(-msg => "$0: invalid action $action", -verbose => 1); } __END__ =head1 NAME wvtest - the dumbest cross-platform test framework that could possibly work =head1 SYNOPSIS wvtest [GLOBAL...] run [RUN_OPT...] [--] command [arg...] wvtest [GLOBAL...] watch [RUN_OPT...] [--] command [arg...] wvtest [GLOBAL...] report [logfile...] GLOBAL: --help, -? display brief help message and exit --man display full documentation RUN_OPT: --[no-]counts [don't] show success/failure counts =head1 DESCRIPTION B will run some-tests and report on the result. This should work fine as long as some-tests doesn't run any sub-tests in parallel. If you'd like to run your tests in parallel, use B and B as described in the EXAMPLES below. =head1 EXAMPLES # Fine if ./tests doesn't produce any output in parallel. wvtest run ./tests # Use watch and report for parallel tests. Note that watch's stderr will # include copies of any top level messages - reporting non-zero # test command exits, etc., and so must be included in the report arguments. wvtest watch --no-counts \ "sh -c '(test-1 2>&1 | tee test-1.log)& (test-2 2>&1 | tee test-2.log)&'" \ 2>test-3.log \ wvtest report test-1.log test-2.log test-3.log =cut bup-0.29/wvtest-bash.sh000066400000000000000000000011161303127641400150350ustar00rootroot00000000000000 declare -a _wvbtstack _wvpushcall() { _wvbtstack[${#_wvbtstack[@]}]="$*" } _wvpopcall() { unset _wvbtstack[$((${#_wvbtstack[@]} - 1))] } _wvbacktrace() { local i loc local call=$((${#_wvbtstack[@]} - 1)) for ((i=0; i <= ${#FUNCNAME[@]}; i++)); do local name="${FUNCNAME[$i]}" if test "${name:0:2}" == WV; then loc="${BASH_SOURCE[$i+1]}:${BASH_LINENO[$i]}" echo "called from $loc ${FUNCNAME[$i]} ${_wvbtstack[$call]}" 1>&2 ((call--)) fi done } _wvfind_caller() { WVCALLER_FILE=${BASH_SOURCE[2]} WVCALLER_LINE=${BASH_LINENO[1]} } bup-0.29/wvtest-bup.sh000066400000000000000000000006731303127641400147150ustar00rootroot00000000000000# Include in your test script like this: # # #!/usr/bin/env bash # . ./wvtest-bup.sh . ./wvtest.sh _wvtop="$(pwd)" wvmktempdir () { local script_name="$(basename $0)" mkdir -p "$_wvtop/t/tmp" || exit $? mktemp -d "$_wvtop/t/tmp/$script_name-XXXXXXX" || exit $? } wvmkmountpt () { local script_name="$(basename $0)" mkdir -p "$_wvtop/t/mnt" || exit $? mktemp -d "$_wvtop/t/mnt/$script_name-XXXXXXX" || exit $? } bup-0.29/wvtest.py000077500000000000000000000201171303127641400141450ustar00rootroot00000000000000#!/bin/sh """": # -*-python-*- bup_python="$(dirname "$0")/cmd/bup-python" exec "$bup_python" "$0" ${1+"$@"} """ # end of bup preamble # # WvTest: # Copyright (C)2007-2012 Versabanq Innovations Inc. and contributors. # Licensed under the GNU Library General Public License, version 2. # See the included file named LICENSE for license information. # You can get wvtest from: http://github.com/apenwarr/wvtest # import atexit import inspect import os import re import sys import traceback _start_dir = os.getcwd() # NOTE # Why do we do we need the "!= main" check? 
Because if you run # wvtest.py as a main program and it imports your test files, then # those test files will try to import the wvtest module recursively. # That actually *works* fine, because we don't run this main program # when we're imported as a module. But you end up with two separate # wvtest modules, the one that gets imported, and the one that's the # main program. Each of them would have duplicated global variables # (most importantly, wvtest._registered), and so screwy things could # happen. Thus, we make the main program module *totally* different # from the imported module. Then we import wvtest (the module) into # wvtest (the main program) here and make sure to refer to the right # versions of global variables. # # All this is done just so that wvtest.py can be a single file that's # easy to import into your own applications. if __name__ != '__main__': # we're imported as a module _registered = [] _tests = 0 _fails = 0 def wvtest(func): """ Use this decorator (@wvtest) in front of any function you want to run as part of the unit test suite. Then run: python wvtest.py path/to/yourtest.py [other test.py files...] to run all the @wvtest functions in the given file(s). """ _registered.append(func) return func def _result(msg, tb, code): global _tests, _fails _tests += 1 if code != 'ok': _fails += 1 (filename, line, func, text) = tb filename = os.path.basename(filename) msg = re.sub(r'\s+', ' ', str(msg)) sys.stderr.flush() print '! %-70s %s' % ('%s:%-4d %s' % (filename, line, msg), code) sys.stdout.flush() def _caller_stack(wv_call_depth): # Without the chdir, the source text lookup may fail orig = os.getcwd() os.chdir(_start_dir) try: return traceback.extract_stack()[-(wv_call_depth + 2)] finally: os.chdir(orig) def _check(cond, msg = 'unknown', tb = None): if tb == None: tb = _caller_stack(2) if cond: _result(msg, tb, 'ok') else: _result(msg, tb, 'FAILED') return cond _code_rx = re.compile(r'^\w+\((.*)\)(\s*#.*)?$') def _code(): text = _caller_stack(2)[3] return _code_rx.sub(r'\1', text) def WVSTART(message): filename = _caller_stack(1)[0] sys.stderr.write('Testing \"' + message + '\" in ' + filename + ':\n') def WVMSG(message): ''' Issues a notification. ''' return _result(message, _caller_stack(1), 'ok') def WVPASS(cond = True): ''' Counts a test failure unless cond is true. ''' return _check(cond, _code()) def WVFAIL(cond = True): ''' Counts a test failure unless cond is false. ''' return _check(not cond, 'NOT(%s)' % _code()) def WVPASSEQ(a, b): ''' Counts a test failure unless a == b. ''' return _check(a == b, '%s == %s' % (repr(a), repr(b))) def WVPASSNE(a, b): ''' Counts a test failure unless a != b. ''' return _check(a != b, '%s != %s' % (repr(a), repr(b))) def WVPASSLT(a, b): ''' Counts a test failure unless a < b. ''' return _check(a < b, '%s < %s' % (repr(a), repr(b))) def WVPASSLE(a, b): ''' Counts a test failure unless a <= b. ''' return _check(a <= b, '%s <= %s' % (repr(a), repr(b))) def WVPASSGT(a, b): ''' Counts a test failure unless a > b. ''' return _check(a > b, '%s > %s' % (repr(a), repr(b))) def WVPASSGE(a, b): ''' Counts a test failure unless a >= b. ''' return _check(a >= b, '%s >= %s' % (repr(a), repr(b))) def WVEXCEPT(etype, func, *args, **kwargs): ''' Counts a test failure unless func throws an 'etype' exception. You have to spell out the function name and arguments, rather than calling the function yourself, so that WVEXCEPT can run before your test code throws an exception. 
''' try: func(*args, **kwargs) except etype as e: return _check(True, 'EXCEPT(%s)' % _code()) except: _check(False, 'EXCEPT(%s)' % _code()) raise else: return _check(False, 'EXCEPT(%s)' % _code()) wvstart = WVSTART wvmsg = WVMSG wvpass = WVPASS wvfail = WVFAIL wvpasseq = WVPASSEQ wvpassne = WVPASSNE wvpaslt = WVPASSLT wvpassle = WVPASSLE wvpassgt = WVPASSGT wvpassge = WVPASSGE wvexcept = WVEXCEPT def wvfailure_count(): return _fails def _check_unfinished(): if _registered: for func in _registered: print 'WARNING: not run: %r' % (func,) WVFAIL('wvtest_main() not called') if _fails: sys.exit(1) atexit.register(_check_unfinished) def _run_in_chdir(path, func, *args, **kwargs): oldwd = os.getcwd() oldpath = sys.path try: os.chdir(path) sys.path += [path, os.path.split(path)[0]] return func(*args, **kwargs) finally: os.chdir(oldwd) sys.path = oldpath if sys.version_info >= (2,6,0): _relpath = os.path.relpath; else: # Implementation for Python 2.5, taken from CPython (tag v2.6, # file Lib/posixpath.py, hg-commit 95fff5a6a276). Update # ./LICENSE When this code is eventually removed. def _relpath(path, start=os.path.curdir): if not path: raise ValueError("no path specified") start_list = os.path.abspath(start).split(os.path.sep) path_list = os.path.abspath(path).split(os.path.sep) # Work out how much of the filepath is shared by start and path. i = len(os.path.commonprefix([start_list, path_list])) rel_list = [os.path.pardir] * (len(start_list)-i) + path_list[i:] if not rel_list: return curdir return os.path.join(*rel_list) def _runtest(fname, f): mod = inspect.getmodule(f) relpath = _relpath(mod.__file__, os.getcwd()).replace('.pyc', '.py') print print 'Testing "%s" in %s:' % (fname, relpath) sys.stdout.flush() try: _run_in_chdir(os.path.split(mod.__file__)[0], f) except Exception as e: print print traceback.format_exc() tb = sys.exc_info()[2] wvtest._result(e, traceback.extract_tb(tb)[1], 'EXCEPTION') def _run_registered_tests(): import wvtest as _wvtestmod while _wvtestmod._registered: t = _wvtestmod._registered.pop(0) _runtest(t.func_name, t) print def wvtest_main(extra_testfiles=tuple()): import wvtest as _wvtestmod _run_registered_tests() for modname in extra_testfiles: if not os.path.exists(modname): print 'Skipping: %s' % modname continue if modname.endswith('.py'): modname = modname[:-3] print 'Importing: %s' % modname path, mod = os.path.split(os.path.abspath(modname)) nicename = modname.replace(os.path.sep, '.') while nicename.startswith('.'): nicename = modname[1:] _run_in_chdir(path, __import__, nicename, None, None, []) _run_registered_tests() print print 'WvTest: %d tests, %d failures.' % (_wvtestmod._tests, _wvtestmod._fails) if __name__ == '__main__': import wvtest as _wvtestmod sys.modules['wvtest'] = _wvtestmod sys.modules['wvtest.wvtest'] = _wvtestmod wvtest = _wvtestmod wvtest_main(sys.argv[1:]) bup-0.29/wvtest.sh000077500000000000000000000044051303127641400141310ustar00rootroot00000000000000# # Include this file in your shell script by using: # #!/bin/sh # . ./wvtest.sh # # we don't quote $TEXT in case it contains newlines; newlines # aren't allowed in test output. However, we set -f so that # at least shell glob characters aren't processed. _wvtextclean() { ( set -f; echo $* ) } if [ -n "$BASH_VERSION" ]; then . ./wvtest-bash.sh # This keeps sh from choking on the syntax. 
else _wvbacktrace() { true; } _wvpushcall() { true; } _wvpopcall() { true; } _wvfind_caller() { WVCALLER_FILE="unknown" WVCALLER_LINE=0 } fi _wvcheck() { local CODE="$1" local TEXT=$(_wvtextclean "$2") local OK=ok if [ "$CODE" -ne 0 ]; then OK=FAILED fi echo "! $WVCALLER_FILE:$WVCALLER_LINE $TEXT $OK" >&2 if [ "$CODE" -ne 0 ]; then _wvbacktrace exit $CODE else return 0 fi } WVPASS() { local TEXT="$*" _wvpushcall "$@" _wvfind_caller if "$@"; then _wvpopcall _wvcheck 0 "$TEXT" return 0 else _wvcheck 1 "$TEXT" # NOTREACHED return 1 fi } WVFAIL() { local TEXT="$*" _wvpushcall "$@" _wvfind_caller if "$@"; then _wvcheck 1 "NOT($TEXT)" # NOTREACHED return 1 else _wvcheck 0 "NOT($TEXT)" _wvpopcall return 0 fi } _wvgetrv() { ( "$@" >&2 ) echo -n $? } WVPASSEQ() { _wvpushcall "$@" _wvfind_caller _wvcheck $(_wvgetrv [ "$#" -eq 2 ]) "exactly 2 arguments" echo "Comparing:" >&2 echo "$1" >&2 echo "--" >&2 echo "$2" >&2 _wvcheck $(_wvgetrv [ "$1" = "$2" ]) "'$1' = '$2'" _wvpopcall } WVPASSNE() { _wvpushcall "$@" _wvfind_caller _wvcheck $(_wvgetrv [ "$#" -eq 2 ]) "exactly 2 arguments" echo "Comparing:" >&2 echo "$1" >&2 echo "--" >&2 echo "$2" >&2 _wvcheck $(_wvgetrv [ "$1" != "$2" ]) "'$1' != '$2'" _wvpopcall } WVPASSRC() { local RC=$? _wvpushcall "$@" _wvfind_caller _wvcheck $(_wvgetrv [ $RC -eq 0 ]) "return code($RC) == 0" _wvpopcall } WVFAILRC() { local RC=$? _wvpushcall "$@" _wvfind_caller _wvcheck $(_wvgetrv [ $RC -ne 0 ]) "return code($RC) != 0" _wvpopcall } WVSTART() { echo >&2 _wvfind_caller echo "Testing \"$*\" in $WVCALLER_FILE:" >&2 } WVDIE() { local TEXT=$(_wvtextclean "$@") _wvpushcall "$@" _wvfind_caller echo "! $WVCALLER_FILE:$WVCALLER_LINE $TEXT FAILED" 1>&2 exit 1 } # Local Variables: # indent-tabs-mode: t # sh-basic-offset: 8 # End:
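# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original file): the comments at the top
# of wvtest-bup.sh and wvtest.sh explain that a test script should source one
# of them; a minimal hypothetical test script -- the file name
# t/test-example.sh and the commands being exercised are assumptions for
# illustration only -- would typically look like this:
#
#     #!/usr/bin/env bash
#     . ./wvtest-bup.sh     # sources wvtest.sh and adds wvmktempdir/wvmkmountpt
#
#     tmpdir="$(WVPASS wvmktempdir)" || exit $?   # scratch dir under t/tmp/
#
#     WVSTART "basic assertions"
#     WVPASS touch "$tmpdir/file"                  # passes if the command exits 0
#     WVFAIL test -e "$tmpdir/missing"             # passes if the command exits non-zero
#     WVPASSEQ "$(cat "$tmpdir/file")" ""          # passes if the two strings are equal
#
#     WVPASS rm -r "$tmpdir"
#
# Such a script would normally be run through the Perl runner above, e.g.
#
#     ./wvtest run ./t/test-example.sh
#
# which reads the "! file:line message ok/FAILED" lines emitted by _wvcheck
# (and by wvtest.py's _result) and prints the pass/failure summary.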