===== dstat-0.7.4/.github/ISSUE_TEMPLATE.md =====

##### SUMMARY

##### ISSUE TYPE
- Bug Report
- Feature Idea
- Documentation Report

##### DSTAT VERSION
```
```

##### OS / ENVIRONMENT

##### STEPS TO REPRODUCE
```
```

##### EXPECTED RESULTS

##### ACTUAL RESULTS
```
```

===== dstat-0.7.4/.github/PULL_REQUEST_TEMPLATE.md =====

##### ISSUE TYPE
- New plugin pull-request
- Feature pull-request
- Bugfix pull-request
- Docs pull-request

##### DSTAT VERSION
```
```

##### SUMMARY
```
```

===== dstat-0.7.4/.gitignore =====

*.pyc

===== dstat-0.7.4/.travis.yml =====

sudo: false
language: python
python:
  - "2.6"
  - "2.7"
  - "3.5"
  - "3.6"
#install:
#  - pip install dbus-python
#  - pip install python-utmp
script:
  - python ./dstat --version
  - python ./dstat -taf 1 5
  - python ./dstat -t --all-plugins 1 5

===== dstat-0.7.4/AUTHORS =====

Dag Wieers

===== dstat-0.7.4/COPYING =====

		    GNU GENERAL PUBLIC LICENSE
		       Version 2, June 1991

 Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

			    Preamble

  The licenses for most software are designed to take away your freedom to share and change it.
By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. 
Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. 
You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. 
But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. 
For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. 
Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. 
Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. 
Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. 
Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. 
If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License.

===== dstat-0.7.4/ChangeLog =====

* 0.8.0 - To be released
- Added Python 3 support

* 0.7.3 - Like a Phoenix from the ashes - release 2016-03-17
- Provide kernel configuration options in error messages where possible
- Added external dstat_md_dstat plugin to show the resync progress of SWRAID (Bert de Bruijn)
- Changed color of 100% to white to make it stand out (Bert de Bruijn)
- Added new --bits option to force bit-values on screen (Scott Baker)
- Fix to allow internal plugins to use underscores/dashes too
- Improve internal dstat_vm plugin to use globs for matching/adding counters
- Added internal dstat_vm_adv plugin to show advanced VM counters (Pallai Roland)
- Added internal dstat_zones plugin to show zoneinfo counters (Pallai Roland)
- Fix warning message when colors are disabled because TERM is not found (Ulp 660181)
- Fix typo in dstat_nfs3_ops and dstat_nfsd3_ops (Chris Larsson)
- Added external dstat_mem_adv plugin to show advanced memory counters (Damon Snyder)
- Allow more variables (self.vars) than nicknames (self.nick) to simplify plugins
- Using -f/--full does not impact -c/--cpu anymore, most systems ship with 2 or more CPUs
- Added counter 'parent' when printing CSV subtitles of list-counters (Michael Boutillier)
- Print decimal values as decimals where possible (so 0 instead of 0.0)
- Added external dstat_ddwrt_* plugins using DD-WRT counters using SNMP infrastructure
- Fixed improper process names using spaces instead of \0 (Edward Smith)
- Added --cpu-use plugin with only CPU usage per CPU (Christian Neukirchen)

* 0.7.2 - Real soon now - release 2010-06-15
- Added external dstat_disk_tps plugin to show transactions per second
- Added support for
filtering /dev/vdaX devices (KVM virtio)
- Added external dstat_squid plugin to show squid counters (Jason Friedland)
- Introduced blockdevices() to list available blockdevices
- Added support for CCISS block devices (named cciss/c0d0)
- Introduced cmd_test() to verify command and options
- Introduced cmd_readlines() to read from command output
- Introduced cmd_splitlines() to split lines read from command output
- Implement best effort /proc integer overflow handling in dstat_net (Ross Brattain)
- Added external dstat_dstat_cpu plugin to show dstat's cpu usage
- Added external dstat_dstat_ctxt plugin to show dstat's context switches
- Added external dstat_dstat_mem plugin to show dstat's memory usage
- Added external dstat_top_bio_adv plugin to show advanced top I/O usage
- Added external dstat_top_cpu_adv plugin to show advanced top cpu usage
- Added external dstat_top_io_adv plugin to show advanced top block I/O usage
- Allow specifying separator for splitline() and splitlines() functions
- Make top-plugins free memory for processes that no longer exist
- Added external dstat_top_int plugin to show most frequent interrupt by name
- Fixed python 2.6 string exception issue (Dbt 585219)
- Documentation improvements

* 0.7.1 - Just the three of us - release 2010-02-22
- Fix external plugins on python 2.2 and older (eg.
RHEL3)
- Documentation improvements
- Implement linecache for top-plugins (caching for statistics)
- Added external dstat_qmail plugin to show the length of qmail queues (Tom Van Looy)
- Added external dstat_dstat plugin to show Dstat's own cputime and latency values
- Added --profile option to get profiling statistics when you exit dstat
- Show a message with the default options when no stats are being specified
- Improved page allocation numbers in vm plugin (Hirofumi Ogawa)
- Introduced proc_readline() and proc_splitline() using linecache for top-plugins
- Introduced proc_readlines() and proc_splitlines() using linecache for top-plugins
- Introduced proc_pidlist() for top-plugins
- New tchg() function to format the time depending on width

* 0.7.0 - Tokyo - release 2009-11-25
- Fixed dstat_disk plugin for total calculation on 2.6.25+ kernels (Noel J. Bergman)
- Precompile regular expressions used as a disk filter (self.diskfilter)
- Raise a warning when discovery returns empty
- Improvements to dstat_battery and dstat_cpufreq
- Added external dstat_power plugin to show ACPI power usage
- Simplified logic inside dstat_time
- Added external dstat_ntp plugin to show time from an NTP server
- Flush sys.stdout later in the loop
- Filtering out more interfaces (eg. bonding) in total values (Bert de Bruijn)
- Provide error output when no power information is available (AC power)
- Make topcpu plugin SMP aware (values are not percent per CPU)
- Drop support for Python 1.5 (and removed dstat15 version)
- Introduced splitlines() function that allows a replace/split on readlines()
- Added external dstat_fan plugin to show fan speed
- Added theming support (not exposed to users yet)
- Added --bw/--blackonwhite option to use dark colors on white background
- Allow any plugin to be added by using long options (ie.
--topbio)
- Added external dstat_memcache_hits plugin to show memcache hits and misses (Dean Wilson)
- Various changes to simplify plugin interface and performance improvements
- Added external dstat_proccount plugin to show total number of processes (Dean Wilson)
- Added external dstat_vzio plugin to show I/O accounting numbers per OpenVZ container
- Added external dstat_battery_remain plugin to show how much battery time is remaining
- Added getnamebypid() function to simplify finding best process name by process id
- Added external dstat_toplatency plugin to show process with top wait time in milliseconds
- Added external dstat_toplatency_avg plugin to show process with top average wait time in milliseconds
- Added external dstat_topcputime plugin to show process with top total cpu in milliseconds
- Added external dstat_topcputime_avg plugin to show process with top average timeslice in milliseconds
- Improvements to timing when writing to screen so that it feels nicer
- Added external dstat_disk_util to show per disk utilization rates in percentage
- Added new --float option to force float values on screen
- Reduce the number of paths used for importing modules (CVE-2009-3894)
- Mass rename plugins to follow better convention (impacts existing options)
- This release was 'sponsored' by the Linux Foundation during the Japan Linux Symposium

* 0.6.9 - Locarno - release 2008-12-02
- Input text color is now gray (again)
- Added external dstat_lustre plugin (Brock Palen, Kilian Vavalotti)
- Validate integer values in /proc/swaps (Bert de Bruijn)
- Added VMware guest dstat_vmmemctl plugin (Bert de Bruijn)
- Added internal dstat_fs plugin showing number of open files/inodes
- Added internal dstat_socket plugin to show total number of various sockets
- Added internal dstat_aio plugin to see number of asynchronous I/O requests
- Listing modules (-M list) now also lists internal plugins
- Added internal dstat_vm plugin showing page faults, allocations and frees
- Added internal
dstat_io plugin showing number of completed I/O requests

* 0.6.8 - Buenos Aires - release 2008-09-12
- Added improved tick patch (Kelly Long)
- Show milliseconds in dstat_time when using --debug cfr. dstat_epoch
- Difference in integer rounding should not affect colouring
- Fixed the IOError when terminal is suspended and IO is unbuffered (Dbt 309953)
- Scheduler accuracy improvements by using sched instead of signal
- Added external dstat_snooze plugin (Kelly Long)
- Improved dstat_time to accept format string from DSTAT_TIMEFMT (Kelly Long)
- Added --pidfile option to write out a pidfile (Kelly Long)
- dstat_epoch and dstat_time now display starttime, not execution time of plugin
- Fix division by zero problem
- Warn when losing ticks (buffering problems or vmware time sync errors)
- Fixed permissions of plugins (Andrew Pollock)
- Fixed exception when specifying -I eth0 (Radek Brich)
- dstat_int plugin now allows -I total (Radek Brich)
- Fixed typo in dstat_topio and dstat_topbio when using CSV output (Bharvani Toshaan)
- Added external dstat_net_packets plugin to show the number of packets per interface
- Default to 25/80 when terminal height/width is set to zero, eg. XEmacs shell (Jeff Mincy)
- Removed complex process name since /proc/pid/cmdline behaves differently on new kernels (Adrian Alves)

* 0.6.7 - Cambridge overdue - released 2008-02-26
- Only rewrite xterm title when XTERM_SHELL is set to bash
- Added more Dbt (Debian bug tracker) ids in the ChangeLog and TODO
- Use sys.exit() instead of exit() before color support is detected
- Renamed external dstat_app plugin to dstat_topcpu
- Added external dstat_topmem plugin
- Improved dstat_topcpu CSV output
- Fixed a problem with asciidoc DocBook output (Dbt 427214, Michael Ablassmeier)
- Report when python-curses is missing and colors don't work (eg. on OpenSUSE)
- Improve --version output wrt.
terminal and color support
- Fixed a few inaccuracies in the man page (John Goggan)
- Fixed opening vanished files in /proc in dstat_topcpu
- Fixed formatting bug in dstat_topcpu
- Added external dstat_mysql_* and dstat_innodb_* plugins
- Added greppipe() and matchpipe() to improve performance on pipe-handling
- Added external dstat_topio and dstat_topbio plugins
- Added external dstat_topoom plugin to show top out-of-memory score
- Added external dstat_mysql5_* plugins (Frederic Descamps)
- Reinstated the use of -D md0 which got lost (Peter Rabbitson)
- Improvement to cpufreq module for SMP systems (Bert de Bruijn)
- Added VMware ESX dstat_vmknic plugin (Bert de Bruijn)
- Added infrastructure to allow C plugins

* 0.6.6 - Unemployed - released 2007-04-28
- Removed SwapCached from the Cached counter (Dbt 418326, Peter Rabbitson)
- Fixed a file descriptor problem on kernel 2.4 (Liviu Daia)
- Install manpage as part of the make install phase (Scott Baker)
- Use SIG_IGN instead of SIG_DFL to disable alarm signal
- Improved dev() for kernel 2.4 device names (Dbt 377199, Filippo Giunchedi)
- If stdout is not a TTY, don't limit the line length (Jason)

* 0.6.5 - Torrox - released 2007-04-17
- Added VMware ESX plugins (Bert de Bruijn)
- Added tcp6 and udp6 statistics within dstat_tcp and dstat_udp
- Added module readlines() taking care of seek() and multiple files
- Improved module exception messages
- Fixed a problem with strings and CSV output in dstat_time and dstat_app (Vinod Kutty)
- Removed broken dstat_clock plugin (use dstat_time or dstat_epoch)
- Disabled the generic exception handling of OSError and IOError to force a stacktrace (supastuff@freenode)

* 0.6.4 - Ahoy - released 2006-12-12
- Fixed dstat_clock to use localtime() instead of gmtime()
- Added external plugin dstat_vz for openvz cpu statistics
- Removed the underscoring of the counter titles
- Added underlining for the counter titles
- Do not return md and dm devices during disk discovery
- Renamed
dstat_time plugin to dstat_epoch (-T/--epoch)
- Moved dstat_clock plugin into main dstat program as dstat_time (-t/--time)

* 0.6.3 - Amsterdam - released 2006-06-26
- Changed default (silver) color of delimiter to gray
- Fixed sum() and enumerate() only when it isn't there (Jesse Young)
- Added external plugin dstat_app
- Added ibm-acpi support to dstat_thermal
- Exclude md-devices from total
- Improved debug output somewhat
- Fixed a battery plugin bug (Christophe Vandeplas)
- Added individual swap monitoring (-s with -S)
- Small performance improvements
- Raise module exceptions when --debug is invoked
- Removed the memory-leaking curses implementation (Dbt 336380, supastuff@freenode)
- Added dist, rpm and srpm targets to Makefile
- Moved documentation to asciidoc at last

* 0.6.2 - Cumbernauld - released 2006-03-08
- Fixed situation where no TERM environment variable was set (William Webber)
- Print out terminal debug info (TERM env and terminal type)
- Added SwapCached value to Cached (Bert de Bruijn)
- Added external plugin dstat_clock, a human-readable alternative for dstat_time
- Fixed problem with Broken pipe when doing eg.
dstat | head -1 (Eike Herzbach)

* 0.6.1 - Fishkill - released 2005-09-05
- Look for plugins in ~/.dstat/ too
- Added '-M list' to show the list of available modules per path
- Fixed a bug in dpopen causing gpfs/gpfsop to fail after a while
- Change terminal title (if terminal supports it)
- Don't trim the cpulist to 2 items when -f (Sébastien Prud'homme)
- Exclude md-devices from total (Dbt 318950, Peter Cordes)
- Now accept 'total' keyword with -C (like -D and -N)
- Rewrote the path-inserting code
- Added asciidoc based manual page
- Added external plugin dstat_rpc
- Added external plugin dstat_rpcd
- Added external plugin dstat_nfs3
- Added external plugin dstat_nfs3op
- Added external plugin dstat_nfsd3
- Added external plugin dstat_nfsd3op
- Improvements to plugin error handling

* 0.6.0 - Bettiesbaai - released 2005-05-29
- Removed keyboard input prevention patch. (Dbt 304673, Marc Lehmann)
- Fixed bug with: dstat -tit -I 177
- Added ipc stats (--ipc)
- Added lock stats (--lock)
- Added raw stats (--raw)
- Added unix stats (--unix)
- Improved udp stats
- Reimplemented -I eth0,ide1 (Bert de Bruijn)
- Smarter /proc handling, seek(0) instead of re-open()
- Implemented dopen() as a wrapper hash for file descriptors
- Small speedup improvements after profiling
- Improvement in handling compatible stats (eg.
disk, disk24, disk24old) - Added initial values (step=0) for disk, int, page, and sys stats - Allowed external tools to use the dstat classes - Added example scripts using the dstat classes (mstat.py and read.py) - Allowed to interface with external plugins - Added external acpi plugin dstat_battery - Added external acpi plugin dstat_cpufreq - Added external acpi plugin dstat_thermal - Added external app plugin dstat_postfix - Added external app plugin dstat_sendmail - Added external app plugin dstat_gpfs - Added external app plugin dstat_gpfsop - Added external plugin dstat_dbus - Added external plugin dstat_freespace - Added external plugin dstat_utmp - Added external plugin dstat_wifi - Removed user stat (now in external dstat_utmp plugin) - Smaller fixes and overall improvements - Improved help output and manpage - Added README.examples, README.performance and README.plugins - Added profiling/debuging code (--debug) - Rewrote cprint/cprintlist logic - Get rid of python-curses requirement for SLES9 (although it helps to have it) - Fixed dstat_disk24old for newer 2.4 kernels without CONFIG_BLK_STATS (Susan G. Kleinmann) - Improved dstat_disk24 for newer 2.4 kernels with CONFIG_BLK_STATS (Susan G. Kleinmann) - Allow for specifying compatible stats on command line (eg. -M disk24,disk24old,page24) - Make time stat more detailed when --debug is used. - Implemented infrastructure to pipe to commands - Started collecting different proc-files for debugging - Disable headers if less than 6 lines in terminal * 0.5.10 - released 2005-04-08 - Small fix to restore terminal for all exit paths (Dbt 303526, Modesto Alexandre) - Get rid of duplicate 'screen width too small' error message in dstat15 * 0.5.9 - released 2005-03-28 - Make default list total lists (cpu, disk, net) - Fix clearline ANSI to work on older (Debian?) 
rxvt (Joshua Rodman) - Improved color/vt100 terminal capabilities logic (Dbt 300288, Charles Lepple) - Finally use curses for some of the terminal capabilities logic - Improvement to non-tty handling for intermediate updates - Small fix to handle the edge of the counters better - Prevent keyboard input/echo when running * 0.5.8 - released 2005-03-15 - Added user stats (-u), using python-utmp - Bail out if all requested stats fail - Replaced --noheader option by --noheaders option (like vmstat) - Added -V as short for --version - Improved help output - Allow CSV output and human output concurrently - Removed --csv option (now use --output option) - Added gnome to known ANSI capable terminal emulation - Replaced save and restore ANSI to save and restore VT100 (Olav Vitters) - Backported dstat to python 1.5 again * 0.5.7 - released 2004-12-31 - Change Makefile to not install when run without target (Kurt Roeckx) - Fixed another crash caused by /proc instability - Added --csv option to output Comma-Seperated-Value output - If output is not a tty, don't care about line-width * 0.5.6 - released 2004-12-20 - Made sys and int stats unit-aware (so 10000 int/sec -> 10.0k) (Anton Blanchard) - Improve conv() function and stat show() functions - Improved the calculation of the cpu usage - cpu stats will now show hardirq and softirq by default if possible (Anton Blanchard) - Color cpu, proc, tcp and udp stats too - Don't clear the line after restoring the cursor at the start (disable flickering) - Better formatting for load and proc stats - cpu stats are not longer snapshots but average over delay - Fix for diskless systems (Werner Augustin) - Gracefully handle incorrect arguments - Important changes to header-model - Added smp support (Bert de Bruijn) - proc stats now show averages - Check if output is a tty, else disable colors and updates - Fixed bug in interrupt stats on smp systems (Bert de Bruijn) - Improved interrupt stats (Bert de Bruijn) - Improvement in output, 10.0k 
or 5.0 will be displayed simply as 10k or 5 - proc stats now show floats * 0.5.5 - released 2004-02-12 - In fact, round() was not the problem, use str() instead. (Anton Blanchard) - Abandoned the use of round() as it is limited to integers (Juergen Kreileder) * 0.5.4 - released 2004-10-25 - Added a python 1.5 version of dstat (Ville Herva) - Fixed a problem with count - Improved the logic for displaying repetitive headers - Now --nocolor implies --noupdate (since it implies no ANSI escape sequences) - Removed the 'Exiting on user request' message * 0.5.3 - released 2004-10-21 - Added -M or --mods option to allow modules - Added --full option to expand the -D, -I and -N discovery lists - Re-added the number of new processes (the --vmstat will no longer resemble vmstat) - More intelligent way of ordering stats to fit as much in screen width as possible - Fixed a crash when counters overflow (Francois Postaire) - Added manpage, kindly donated by Andrew Pollock - Added --tcp and --udp stats (may be improved later ?) 
- Fixes to disk24old and new cpu24 (for Debian 2.4.26 kernel) - Signal handling cleanup - Partitions are excluded from discovery on 2.4 kernels * 0.5.2 - released 2004-10-13 - Improved disk and net discovery functions (Ville Herva) - Fixed a bug with values when using --noupdate (Pasi Pirhonen) - Documented the internals a bit more, hoping people will contribute - Implemented a fix for when the output exceeds terminal columns * 0.5.1 - released 2004-10-11 - Fixed bug that caused counters to not be averages when delay > 1 - Added time stats (-t) * 0.5 - released 2004-10-11 - Changed some more int()'s into long()'s (Pasi Pirhonen) - Fixed the cpu out of index, /proc instability (Pasi Pirhonen) - Improved the rounding function - Added --integer, to get earlier 'integer value' behaviour - Added --noheader option to only see header at start - Unbuffered sys.stdout and added ANSI colors - Added --nocolor to disable newly introduced colors - Added --noupdate to disable intermediate updates when delay > 1 - When counters roll over, show dash - Fixed 2 crash bugs caused by /proc instability * 0.4 - released 2004-10-26 - Added interrupt stats (-i) - Order of the stats adhere the order of the options - Interval more precise, using signals instead of sleep - Modular rewrite using classes - Added -D, -I and -N options to customize list - Allow to specify 'total' for -D and -N to get aggregated numbers - Added --vmstat option, vmstat alike output - Implemented a basic network, interrupt and disk 'discovery' function - Replaced hardcoded 4096 by resource.getpagesize() - Added enumerate() for python < 2.3, and rewrote/removed it again - Check for support of proc filesystem and entries - Fixes for kernel 2.4 support (disk and paging) - Count number of CPUs (for kernel 2.4 disk support) - Titles are now truncated to max-1 - Show header when it disappears from screen - Allow to specify interrupt by device eg. 
-I eth0,acpi or -I ide0,yenta - Fix disk stats bug related to RHEL3 U3 iostat bug on 2.4 (RHbz 137595, Charlie Bennett) - Uncommented old 2.4 disk stats functionality (see source for enabling it) - Initial public release * 0.3 - Added load stats (-l) - Added memory stats (-m) - Output now fits into space and adds unit - Converted all values to bytes * 0.2 - Added disk io stats (-d) - Added proc stats (-p) - Important layout changes * 0.1 - Initial release dstat-0.7.4/LINKS000066400000000000000000000005671351755116500134650ustar00rootroot00000000000000Terminal emulation http://www.termsys.demon.co.uk/vtansi.htm http://vt100.net/docs/vt100-ug/chapter3.html#DECSC NFS http://www.hn.edu.cn/book/NetWork/NetworkingBookshelf_2ndEd/nfs/ch14_02.htm MySQL performance counters http://www.mysql.com/news-and-events/newsletter/2004-01/a0000000301.html Kernel schedstat http://eaglet.rain.com/rick/linux/schedstat/v10/format-10.html dstat-0.7.4/Makefile000066400000000000000000000035151351755116500143160ustar00rootroot00000000000000name = dstat version = $(shell awk '/^Version: / {print $$2}' $(name).spec) prefix = /usr sysconfdir = /etc bindir = $(prefix)/bin datadir = $(prefix)/share mandir = $(datadir)/man .PHONY: all install docs clean all: docs @echo "Nothing to be build." docs: $(MAKE) -C docs docs install: # -[ ! 
-f $(DESTDIR)$(sysconfdir)/dstat.conf ] && install -D -m0644 dstat.conf $(DESTDIR)$(sysconfdir)/dstat.conf install -Dp -m0755 dstat $(DESTDIR)$(bindir)/dstat install -d -m0755 $(DESTDIR)$(datadir)/dstat/ install -Dp -m0755 dstat $(DESTDIR)$(datadir)/dstat/dstat.py install -Dp -m0644 plugins/dstat_*.py $(DESTDIR)$(datadir)/dstat/ # install -d -m0755 $(DESTDIR)$(datadir)/dstat/examples/ # install -Dp -m0755 examples/*.py $(DESTDIR)$(datadir)/dstat/examples/ install -Dp -m0644 docs/dstat.1 $(DESTDIR)$(mandir)/man1/dstat.1 docs-install: $(MAKE) -C docs install clean: rm -f examples/*.pyc plugins/*.pyc $(MAKE) -C docs clean test: ./dstat --version ./dstat -taf 1 5 ./dstat -t --all-plugins 1 5 dist: clean $(MAKE) -C docs dist # svn up && svn list -R | pax -d -w -x ustar -s ,^,$(name)-$(version)/, | bzip2 >../$(name)-$(version).tar.bz2 # svn st -v --xml | \ xmlstarlet sel -t -m "/status/target/entry" -s A:T:U '@path' -i "wc-status[@revision]" -v "@path" -n | \ pax -d -w -x ustar -s ,^,$(name)-$(version)/, | \ bzip2 >../$(name)-$(version).tar.bz2 git ls-files | pax -d -w -x ustar -s ,^,$(name)-$(version)/, | bzip2 >../$(name)-$(version).tar.bz2 rpm: dist rpmbuild -tb --clean --rmspec --define "_rpmfilename %%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm" --define "_rpmdir ../" ../$(name)-$(version).tar.bz2 srpm: dist rpmbuild -ts --clean --rmspec --define "_rpmfilename %%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm" --define "_srcrpmdir ../" ../$(name)-$(version).tar.bz2 snap: cd packaging/snap/; snapcraft dstat-0.7.4/README.adoc000066400000000000000000000032341351755116500144410ustar00rootroot00000000000000= DSTAT image:https://travis-ci.org/rear/rear.svg?branch=master["Build Status", link="https://travis-ci.org/dagwieers/dstat"] Dstat is a versatile replacement for vmstat, iostat, mpstat, netstat and ifstat. Dstat overcomes some of their limitations and while also adding extra features, more counters and flexibility. 
Dstat is handy for monitoring systems during performance tuning tests, benchmarks or troubleshooting.

Dstat allows you to view all of your system resources instantly. For example, you can compare disk usage in combination with interrupts from your IDE controller, or compare network bandwidth directly with disk throughput.

Dstat gives you detailed, selective information in columns and clearly indicates the magnitude and unit in which the output is displayed. Less confusion, fewer mistakes.

Dstat is unique in letting you aggregate block device throughput for a certain diskset or networkset. This allows you to see the throughput for all the block devices that make up a single filesystem or storage system.

Dstat is extensible! You can write your own Dstat plugins to monitor whatever you like in just a few minutes, based on the provided examples and a little bit of Python knowledge.

Dstat's output is by default designed for human consumption in real-time. Dstat also allows CSV output, so you can archive historical data in a file to be imported later into a spreadsheet. This is useful for generating graphs.

Since it's practically impossible to test dstat on every possible permutation of kernel, python or distribution version, we need your help and feedback in testing. If you have improvements or bug reports, please send them to: http://github.com/dagwieers/dstat

dstat-0.7.4/TODO

### Disclaimer

This is my TODO list. If you're interested in one of these features, there are 2 options. Either wait for someone to implement it or sponsor someone to implement it. If you want to implement something, please contact me first so that we can discuss acceptable implementations. In some cases I haven't thought about it too deeply, but for others I know exactly what I require.

If you have other nice ideas that you think would be an improvement, please contact me as well.
Send an email to: Dag Wieers

### Usability

+ Add --config option and use /etc/dstat.conf and ~/.dstat to influence output (see example dstat.conf)
+ Allow to force to given magnitude (--unit=kilo)
+ Look at possibilities to show deviation (on second line ? not practical)
+ Check for dark/light background color and change colors accordingly (option --bw/--blackonwhite)
+ Show parts of counters in other colors (eg. color the 6 in 6134B in yellow to indicate it's kilobyte)
+ Look into adding sched_setscheduler() calls for improved priority

### General improvements

+ Implement better (?) protection against counter rollovers (see mail from Sebastien Prud'homme/Ross Brattain, already improved in meantime)

### Documentation (help welcome!)

+ Document every plugin as part of python comments (explain unit, what it means etc...)
+ Create document on general system performance tuning (explaining the different values in /proc, especially the concerning ones)
+ Create document on general system performance tools (explaining the different uses of tools like dstat, iostat, pmap, strace, tcpdump)
+ Comply to PEP8: http://www.python.org/dev/peps/pep-0008/

### Export/Graph

+ Interface with rrdtool (python-rrd ?)
+ Allow for different types of export modules (only CSV now)
  - ODS could include graphs for plugins !
  - HTML output plugin helps for people sharing output on websites
+ Allow to write out to syslog (or remote syslog)
+ Allow to write buffered to disk (optional ?)
+ Write out user input to CSV

### Plugin improvements

+ Don't calculate counters twice when a plugin is loaded twice

### Extending statistics (help welcome!)

+ Add %steal, %guest to --cpu-adv plugin
+ Add slab plugin (see /proc/slabinfo and slabtop)
+ Add xorg plugin (xdpyinfo, xrestop)
  - Add 'most expensive X app' (look at xrestop)
  - Add number of (active) X sessions and X clients
+ Add icmp plugin ?
+ Add application plugin (-a or -A pid,cmd)
+ Add user plugin (number of users logged on, utmp is not that useful, /proc/key-users)
+ Look into interfacing with apps
  - amavisd, apache, bind, cifs, dhcpd, dnsmasq, gfs, samba, squid
+ Look into interfacing with specific HW counters in /proc
  - qla2300
+ Look at /proc/meminfo, /proc/mdstat, /proc/net/netstat, /proc/net/snmp, /proc/vmstat, /proc/drbd
+ Look at /proc/fs/cifs/stats
+ Add i2c plugin (see /sys/class/i2c-adapter/i2c-*/*/*/*/*/*)
+ Allow for SNMP counters to be added
+ Add LVM stats
+ Allow to have multiple '1st expensive ... app' and '2nd expensive ... app'
+ Add 'most iowaiting app' plugin
+ Add systemtap/perf integration
+ Add dropwatch statistics

### Plugin issues

+ plugins that use /proc/pid/stats are reasonably slow (implement in C might help)
+ disk plugin: /proc/partitions can have negative numbers, seen on systems with long uptime. dstat handles this except for calculating the very first stat, no work-around possible?
+ proc plugin: (run and blk) does not work on 2.4.24+ (to be confirmed ?)
+ tcp plugin: is very slow and generates lots of softirqs (on busy systems), to be confirmed

### Redesign (v1.0)

+ Create a nicer interface for plugins (with meaningful names, eg. not nick)

### Redesign (v2.0)

+ Create modules that can contain samples of different units
  CPU: (see mpstat) sys, usr, idl, iow, hiq, siq (percentage) intr/sec (int)
  IO: (see iostat -x) tps (int) blk_read/sec, blk_wrtn/sec (kB/sec)
+ Design proper object model and namespace for _all_ possible stats
+ Create a separate curses-based tool, much like nmon (dstat stays line-based)
+ Create client/server monitoring tool

dstat-0.7.4/docs/Makefile

prefix = /usr
datadir = $(prefix)/share
mandir = $(datadir)/man

template = dagit.ott
adoctargets = $(shell echo *.adoc)
htmltargets = $(patsubst %.adoc, %.html, $(adoctargets))

all:

dist: docs

docs: dstat.1 $(htmltargets)

install: dstat.1
	install -Dp -m0644 dstat.1 $(DESTDIR)$(mandir)/man1/dstat.1

clean:
	rm -f dstat.1 *.html *.xml

%.1.html: %.1.adoc
	asciidoc -d manpage $<

%.html: %.adoc
	asciidoc $<

%.1.xml: %.1.adoc
	asciidoc -b docbook -d manpage $<

%.1: %.1.xml
	@xmlto man $<

%.xml: %.adoc
	asciidoc -b docbook -d article -o $@ $<

%.htm: %.adoc
	asciidoc -s -b html4 -d article -o $@ $<

%.xhtml: %.adoc
	asciidoc -s -b xhtml11 -d article -o $@ $<

%.tmp.odt: %.xml
#	-make -C /home/dag/home-made/docbook2odf/ dag-cv
	docbook2odf -f --params generate.meta=0 -o $@ $<

%.odt: $(template) %.tmp.odt
	unoconv -f odt -t $(template) -o $@ $<

dstat-0.7.4/docs/counter-rollovers.adoc

= All you ever wanted to know about counter-rollovers in Dstat

== What you need to know about counter rollovers

Unfortunately, Dstat is susceptible to counter rollovers, which may give you bogus performance output. Linux currently implements counters as 32bit values (not sure on 64bit platforms). This means a counter can go up to 2^32 (= 4294967296 = 4G) values.
Especially for network devices (which are counted in bytes) this is too little, as it means the counter is reset to 0 every 4GB. On a 1Gbps interface that is fully used, this happens every 32 seconds. On 2 bonded 10Gbps interfaces, this happens after 1.6 seconds.

Since /proc is updated every second, this becomes almost impossible to catch.

== How does this impact Dstat ?

Currently Dstat has a problem if you specify delays that are too big. I.e. using a 60 or 120 second delay in Dstat will make Dstat check these counters only once per minute or every two minutes.

In case a counter rolls over, it may be lower than the previous value, or worse, the value may actually be higher. In the first case we compensate, because we know a rollover (at least one) happened, but if more than one rollover happened in the interval, you're screwed. If however the rollover causes a higher value than the previous one in that interval, you're screwed too :-)

In both situations where you're screwed, we cannot help you either, because we don't know. So we cannot compensate or even notify the user of the problem. This is very problematic, and it's important you are aware of this.

== What are the solutions ?

The only fix for Dstat is to check more often than the specified delay. Unfortunately, this requires a re-design (or an ugly hack). But if rollovers happen more than once (or values are larger than the max value) we cannot fix this.

There are plans to use 64bit counters on Linux and/or to change the output from using bytes to kbytes. None of this is sure. (add pointers to threads)

== What can I do ?

Since this is Open Source, you are free to fix this and send me the fix. Or help with a redesign of Dstat to overcome this problem. Also look at the TODO file to see what other changes are expected in a redesign of Dstat.

Since I have a lot of other responsibilities and am currently not using Dstat for something where this problem matters much, I will have no time to look at it closely (unless the fix or the redesign is made fairly simple). It all depends on how quickly I think I can fix/redesign it and how much time I have.

Your help could be to reduce the time it takes for me to fix it :)

NOTE: Please send me improvements to this document.
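The single-rollover compensation described above can be sketched in a few lines of Python. This is an illustrative helper (the name `interval_delta` is made up, not Dstat's actual code), and it inherits exactly the limitation discussed: a double rollover within one interval goes undetected and is silently under-reported.

```python
MAX32 = 1 << 32  # range of a 32-bit kernel counter


def interval_delta(new, old, wrap=MAX32):
    """Difference between two counter samples, compensating for at
    most one rollover. If the counter wrapped more than once in the
    interval, the result is wrong and there is no way to tell --
    exactly the failure mode described above."""
    if new >= old:
        return new - old
    # new < old: assume exactly one wrap and add back the counter range
    return new - old + wrap
```

For instance, a sample that wrapped from near the top of the range, `interval_delta(100, MAX32 - 50)`, correctly yields 150, while two wraps in the same interval would be indistinguishable from one.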

dstat-0.7.4/docs/cplugins.adoc

Defining Python class methods in C
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/54352

python class in c / global calls
http://mail.python.org/pipermail/python-list/2002-August/160784.html

How to create a python class in C
http://mail.python.org/pipermail/python-list/2000-August/047158.html
dstat-0.7.4/docs/dstat-paper.adoc

= Dstat: pluggable real-time monitoring
Dag Wieers
$Id$

'This Dstat paper was originally written for LinuxConf Europe that was held together with the Linux Kernel summit at the University in Cambridge, UK in August 2007.'

== Introduction

Many tools exist to monitor hardware resources and software behaviour, but few tools exist that allow you to easily monitor any conceivable counter. Dstat was designed with the idea that it should be simple to plug in a piece of code that extracts one or more counters, and make them visible in a way that pleases the eye and helps you extract information in real-time.

By being able to select those counters that you want (and likely those counters that matter to you in the job you're doing) you make it easier to correlate raw numbers and see a pattern that may otherwise not be visible.

== A case for Dstat

A few years ago I was involved in a project that was testing a storage cluster with a SAN back-end using GPFS and Samba for a broadcasting company. The performance tests that were scheduled together with the customer took a few weeks to measure the different behaviour under different stresses. During these tests there was a need to see how each of the components behaved and to find problematic behaviour during testing. Also, because it involved 5 GPFS nodes, we needed to make sure that the load was spread evenly during the test. If everything went well repeatedly, the results were validated and the next batch of tests could be prepared and run.

We started off using different tools at first, but the more counters we were trying to capture, the harder it was to post-process the information we had collected. What's more, we often saw only after performing the tests that the data was not representative because the numbers didn't add up.
Sometimes it was caused by the massive setup of clients that were autonomously stressing the cluster. On other occasions we noticed that the network was the culprit. All in all, we lost time because we could only validate the results by relating numbers after the tests were complete and not during the tests.

Complicating the matter was the fact that 5 different nodes were involved, and using the normal command line tools like vmstat, iostat or ifstat (which only showed us a small part of what was happening) was problematic, as each needed a different terminal. Besides, not all information was interesting.

Eventually Dstat was born, to make a dull task more enjoyable. After the project was finished I was able to correlate system resources with network throughput, TCP information, Samba sessions, GPFS throughput, accumulated block device throughput, HBA throughput, all within a single interval on one screen for the complete cluster.

== Dstat characteristics

There are many ideas incorporated into Dstat by design, and this section serves to list all of them. Not all of them may appeal to the task you're doing, but the combination may make it an appealing proposition nevertheless.

=== History of counters

An important characteristic of line-based tools like vmstat, iostat or ifstat is the fact that you can compare historically collected data with new data. This allows you to have a good feeling of how something is evolving. Compare this to tools like top or nmon, where data is often being refreshed and you lose historical information (but which in return can provide you with a lot more information at the same time).

=== Adding unit indication

It was very important that when numbers were compared, they were in the same unit, and not eg. a different power exponent. The human mind sometimes works in mysterious ways, and more so when working with numbers for hours and hours. Adding the unit is something very convenient and may reduce the human error factor.
Additionally, indicating the unit also makes sure that the columns have a fixed width. Often when using vmstat or other tools, the columns tend to shift depending on the width of the counter. This makes it very inconvenient to find counters in the shifted output.

=== Colour highlighting units

After I added colours to help improve indicating units, I noticed that the colours also helped to show patterns. This of course is very limited, nevertheless it instantly shows when numbers are flat or changes are taking place.

IMPORTANT: The colours are arbitrarily chosen. Do not make the mistake to assume that green means good and red means bad. There is no real meaning to the colour itself, however a change of colour does mean that a value has gone over some pre-defined limit.

=== Intermediate updates

During tests, when you choose to see average values over a given time, it can be useful to see how the averages evolve. Dstat, by default, displays intermediate updates. This means that if you select to see 10 second averages, after each second you see the accumulated average over the timespan. *This means that after 4 seconds with intermediate updates, you see an average taken over the 4 second timeframe.*

NOTE: This means that the closer you get to the given timeframe (eg. 10 seconds) the more likely that it nears its final average over that period.

=== Adding custom counters

Dstat was specifically designed to enable anyone to add their own counters in a matter of minutes. The plugin-based system takes care of displaying, colouring and adding units to the counters. As a plugin-writer, you only have to focus on extracting the counters from the kernel (procfs or sysfs), logfiles or daemons.

=== Selecting plugins and counters

Being able to add custom counters is important, but selecting those counters that you really need is even more important if you want to correlate counters and see patterns. Less is more.
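The custom counters described above take the form of a small Python class. The sketch below is modelled on the "helloworld" example plugin that ships with Dstat; the `dstat` base class here is only a stub so the sketch runs on its own, and the attribute comments describe their conventional meaning rather than an authoritative API reference.

```python
class dstat:
    """Stub standing in for Dstat's real base class, only so this
    sketch is self-contained. Real plugins subclass the class the
    tool provides and are picked up from ~/.dstat/ or the plugin path."""
    def __init__(self):
        self.val = {}


class dstat_plugin(dstat):
    """Modelled on the bundled 'helloworld' example plugin."""
    def __init__(self):
        dstat.__init__(self)
        self.name = 'plugin title'   # column group title
        self.nick = ('counter',)     # sub-column header(s)
        self.vars = ('text',)        # keys that extract() fills in self.val
        self.type = 's'              # display as a string
        self.width = 12              # column width
        self.scale = 0               # scaling/colouring hint

    def extract(self):
        # Called once per interval; a real plugin would read procfs,
        # sysfs, a logfile or a daemon here.
        self.val['text'] = 'Hello world!'
```

The framework handles the header, colouring and unit logic; the plugin author only fills in `extract()`, which is exactly the division of labour the paper describes.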
NOTE: In fact, Dstat currently does not allow you to select just counters, it only allows you to select plugins. However, since you can modify or fork a plugin, you still have the ability to select just those counters you prefer.

=== Exporting to CSV

Having information on screen is one thing, but you most likely need some hard evidence later to make your case. (Why else do all the work?) Dstat allows you to write out all counters in the greatest detail possible to CSV. By default it also adds the command-line used for generating the output, as well as a date and time stamp.

Since Dstat is in the first place meant for human-readable real-time statistics, it will by default also display the counters on screen (unless you _/dev/null_ it).

TIP: Dstat appends to the output file, so that you can add the results of different tests to a single file. However, make sure that you tag each test properly (eg. by using distinct filenames for each different test).

=== Time-plugin included

It may seem a small thing, but having exact time (and date) information for your counters allows for a completely different usage as well. By adding simple date and time information, Dstat can be used as a background process in a screen to monitor the behaviour of your system during the night. This proves to be very valuable, for example, to find offending processes during nightly tasks or to pinpoint their behaviour to certain events that you cannot monitor during working hours.

It is also important when you have multiple Dstats running (eg. for nodes in a cluster) to correlate counters between the outputs.

=== Terminal capabilities

Dstat also takes into account the width and height of your terminal window and modifies output to fit into your terminal. This, of course, has no effect on what ends up in the CSV output. Another (debatable) useful feature is that Dstat will modify the terminal title to indicate on what system it was run and what options were used.
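The append-to-CSV behaviour described above can be illustrated with Python's csv module. The helper name and the field layout (a leading date/time stamp followed by the raw counters) are an approximation for illustration, not Dstat's exact output format.

```python
import csv
import time


def append_csv(path, counters):
    """Append one interval's counters, prefixed with a date/time
    stamp, to a CSV file. Opening in append mode means successive
    test runs accumulate in the same file, as the TIP above notes."""
    with open(path, 'a', newline='') as f:
        writer = csv.writer(f)
        writer.writerow([time.strftime('%d-%m %H:%M:%S')] + list(counters))
```

Calling `append_csv('report.csv', [usr, sys, idl])` once per interval yields one timestamped row per sample, ready to be imported into a spreadsheet.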
Especially when monitoring nodes in a cluster, this can be useful, but even in Gnome finding your Dstat window is handy. WARNING: Some people however are annoyed by the fact that their distribution does not reset the terminal title and Dstat therefor messes it up. There is no way for Dstat to fix this. == Plugins and counters When we talk about plugins, we make a distinction between those plugins that are included within the Dstat tool itself, and those that ship with it externally. In essence there is no real difference, as the internal plugins could easily have been created as an external plugin. The basic difference is that the internal plugins have no dependencies except on procfs. Having the basic plugins as part of Dstat, makes sure that Dstat can be moved as a self-contained file to other systems. === Internal plugins The plugins that have been selected to be part of the Dstat tool itself, and therefor have no dependencies other than procfs, are: - aio: asynchronous I/O counters - cpu, cpu24: CPU counters (+-c+ and +-C+) - disk, disk24, disk24old: disk counters (+-d+ and +-D+) - epoch: seconds since Epoch (+-T+) - fs: file system counters - int, int24: interrupts per IRQ (+-i+ and +-I+) - io: I/O requests completed (+-r+) - ipc: IPC counters - load: load counters (+-l+) - lock: locking counters - mem: memory usage (+-m+) - net: network usage (+-n+ and +-N+) - page, page24: paging counters (+-g+) - proc: process counters (+-p+) - raw: raw socket counters - swap, swapold: swap usage (+-s+ and +-S+) - socket: socket counters - sys: system (kernel) countersA (+-y+) - tcp: TCP socket counters - time: date and time (+-t+) - udp: UDP socket counters - unix: unix socket counters - vm: virtual memory counters For backward compatibility with older kernels there is a cascading system that selects the most appropriate internal plugin for your kernel. (eg. 
the +dstat_disk+ plugin falls back to +dstat_disk24+ and +dstat_disk24old+.) At this moment there is no such system for external plugins.

=== External plugins

This basic functionality is easily extended by writing your own plugins (subclasses of the python Dstat class) which are then inserted at runtime into Dstat. A set of 'external' modules exists for:

- battery: battery usage
- battery-remain: remaining battery time
- cpufreq: CPU frequency
- dbus: DBUS connections
- disk-tps: disk transactions counters
- disk-util: disk utilization percentage
- dstat: dstat cputime consumption and latency
- dstat-cpu: dstat advanced cpu usage
- dstat-ctxt: dstat context switches
- dstat-mem: dstat advanced memory usage
- fan: Fan speed
- freespace: free space on filesystems
- gpfs: GPFS I/O counters
- gpfs-ops: GPFS operations counters
- helloworld: Hello world dispenser
- innodb-buffer: innodb buffer counters
- innodb-io: innodb I/O counters
- innodb-ops: innodb operations counters
- lustre: lustre throughput counters
- memcache-hits: Memcache hit counters
- mysql5-cmds: MySQL communication counters
- mysql5-conn: MySQL connection counters
- mysql5-io: MySQL I/O counters
- mysql5-keys: MySQL keys counters
- mysql-io: MySQL I/O counters
- mysql-ops: MySQL operations counters
- net-packets: number of packets received and transmitted
- nfs3: NFS3 client counters
- nfs3-ops: NFS3 client operations counters
- nfsd3: NFS3 server counters
- nfsd3-ops: NFS3 server operations counters
- ntp: NTP time counters
- postfix: postfix queue counters
- power: Power usage counters
- proc-count: total number of processes
- qmail: qmail queue sizes
- rpc: RPC client counters
- rpcd: RPC server counters
- sendmail: sendmail queue counters
- snooze: Dstat time delay counters
- squid: squid usage statistics
- thermal: Thermal counters
- top-bio: most expensive block I/O process
- top-bio-adv: most expensive block I/O process (advanced)
- top-cpu: most expensive cpu process
- top-cpu-adv: most expensive
CPU process (advanced)
- top-cputime: process using the most CPU time
- top-cputime-avg: process having the highest average CPU time
- top-int: most frequent interrupt
- top-io: most expensive I/O process
- top-io-adv: most expensive I/O process (advanced)
- top-latency: process with the highest total latency
- top-latency-avg: process with the highest average latency
- top-mem: most expensive memory process
- top-oom: process first shot by the OOM killer
- utmp: utmp counters
- vm-memctl: VMware guest memory counters
- vmk-hba: VMware kernel HBA counters
- vmk-int: VMware kernel interrupt counters
- vmk-nic: VMware kernel NIC counters
- vz-cpu: OpenVZ CPU counters
- vz-io: I/O usage per OpenVZ guest
- vz-ubc: OpenVZ user beancounters
- wifi: WIFI quality information

=== Most-wanted plugins

Hoping someone interested reads this document, I added a few plugins that would be ``very nice'' to have but are currently lacking:

- slab: needs a VM expert to make sense out of the vast amount of data
- xorg: need information on how to get X resources, would be nice to see the evolution of X resources over time
- samba: lacking information to get counters from Samba without forking smbstatus every second
- snmp: could be useful to relate counters from different systems in a single Dstat
- topx: display the most expensive X application(s)
- systemtap: connecting Dstat to systemtap counters

Creative souls with other ideas are welcome as well!

== Using Dstat

Central to the Dstat command line interface is the selection of plugins. The selection and order of options influence the Dstat output directly.

=== Enabling plugins

The internal plugins have short and/or long options within Dstat, eg. +-c+ or +--cpu+ will enable the cpu counters. The external plugins are enabled by a long option including their name, eg. +--top-cpu+.

The following examples enable the time, cpu and disk plugins, and are equivalent.
----
dstat -tcd
dstat --time --cpu --disk
----

=== Total or individual counters

Some of the plugins can show both total and individual values, and therefore have an extra option to influence this decision.

----
dstat -d -D sda,sdb
dstat -n -N eth0,eth1
dstat -c -C total,0,1
----

You can show both the individual and total values as follows:

----
[dag@horsea ~]$ dstat -d -D total,hda,hdc
-dsk/total----dsk/hda-----dsk/hdc--
 read  writ: read  writ: read  writ
1384k 1502k: 114k 1332k:  81k  359B
   0    44k:   0    44k:   0     0
   0     0 :   0     0 :   0     0
----

The special +-f+ or +--full+ option selects individual counters by default, and can be overruled by +-C+, +-D+, +-I+, +-N+ or +-S+.

=== Influencing output

Dstat has a few more options to influence its output. With +--nocolor+ one can disable colours. The +--noheaders+ option disables repeating headers. The +--noupdate+ option disables intermediate updates. The +--output+ option is used for writing out to a CSV file.

=== Plugin search path

Dstat looks in the following places for plugins. This allows a user without root privileges to use some extra plugins.

- ~/.dstat/
- <path of dstat binary>/plugins/
- /usr/share/dstat/
- /usr/local/share/dstat/

The option +--list+ shows the available plugins and their location in the order that the plugin search path is used.

NOTE: Plugins are named +dstat_<name>.py+.

== Use-cases

Below are some use-cases to demonstrate the usage of Dstat.

WARNING: The following examples do not look as nice as they do on screen because this document is not printed in colour (and I did not prepare it in colour :-)).

=== Simple system check

Let's say you quickly want to see if the system is doing alright.
In the past this probably was a +vmstat 1+, as of now you would do:

----
dstat -taf
----

.Sample output
----
[dag@rhun dag]$ dstat -taf
-----time----- -------cpu0-usage------ --dsk/sda-----dsk/sr0-- --net/eth1- ---paging-- ---system--
  date/time   |usr sys idl wai hiq siq| read  writ: read  writ| recv  send|  in   out | int   csw
02-08 02:42:48| 10   2  85   2   0   0|  22k   23k: 1.8B    0 |   0     0 |2588B 2952B| 558   580
02-08 02:42:49|  4   3  93   0   0   0|   0     0 :   0     0 |   0     0 |   0     0 |1116   962
02-08 02:42:50|  5   2  90   0   2   1|   0    28k:   0     0 |   0     0 |   0     0 |1380  1136
02-08 02:42:51| 11   6  82   0   1   0|   0     0 :   0     0 |   0     0 |   0     0 |1277  1340
02-08 02:42:52|  3   3  93   0   1   0|   0    84k:   0     0 |   0     0 |   0     0 |1311  1034
----

NOTE: The +-t+ here is completely optional and generally wastes space. But often you are not monitoring for 10 seconds but rather measuring in minutes or hours. Having a general idea of the timescale over which counters have been averaged is nevertheless interesting.

=== What is this system doing now?

I often run both the +dstat_top_cpu+ and +dstat_top_mem+ plugins on a system, just to see what a system is doing. Having a quick look at what application is using the most CPU over a few minutes, and seeing the general memory usage of the top application, gives away a lot about a system.

.Sample output
----
[dag@horsea dag]$ dstat -c --top-cpu -dng --top-mem
----total-cpu-usage---- -most-expensive- -dsk/total- -net/total- ---paging-- -most-expensive-
usr sys idl wai hiq siq|  cpu process   | read  writ| recv  send|  in   out | memory process
  9   2  80   9   0   0|kswapd        0 | 123k  164k|   0     0 |9196B   18k|rsync        74M
  2   3  95   0   0   0|sendmail      1 |   0   168k|2584B   39k|   0     0 |rsync        74M
 18   3  79   0   0   0|httpd        17 |   0    88k|5759B  118k|   0     0 |rsync        74M
  3   2  94   1   0   0|sendmail      1 |4096B    0 |2291B 4190B|   0     0 |rsync        74M
  2   3  95   0   0   0|httpd         1 |   0     0 |2871B 3201B|   0     0 |rsync        74M
 10   7  83   0   0   0|httpd        13 |   0     0 |2216B   10k|   0     0 |rsync        74M
  2   2  96   0   0   0|                |   0    52k| 724B 2674B|   0     0 |rsync        74M
----

=== What process is using all my CPU, memory or I/O at 4:20 AM?
Imagine the monitoring team notices strange peaks, a system engineer got a worthless message, the system was swapping extensively, a process got killed. Something indicates the system is doing something unexpected, but what is causing it and why?

As of now you can do:

----
screen dstat -tcy --top-cpu 120
screen dstat -tmgs --top-mem 120
screen dstat -tdi --top-io 120
----

to see what process is using the most CPU, the most memory and the most I/O resources. And hopefully one day we can do:

----
dstat -tn --top-net 120
dstat -tn --top-x 120
----

Leave it running during the night and in the morning you can see the light.

=== How many ticks per second on my kernel?

In some cases it can be useful to see how many ticks (timer interrupts) your kernel is producing. With older kernels this is a fixed number (usually 100, 250 or 1000) but on newer kernels the number can be dynamic. Also on VMware virtual machines, the number of ticks can cause clock issues, so in that case if you want to see what is happening, you can simply do:

----
dstat -ti -I0 --snooze --debug
----

Dstat nowadays can also detect lost ticks (when the number of ticks does not match the time progress). This is useful to correlate VM issues with other problems.

////
=== Monitoring memory consumption of a process over time

Now, I have twice used Dstat to verify memory usage, and I have concluded that two programs have severe memory leaks. One, unsurprisingly, is Firefox; the other sadly is wnck-applet (yes, unfortunately). Dstat is currently not really useful for specifying your own process to monitor (unless you dig into the module, which is easier than one might expect). But I am already anticipating Pstat, which is a Dstat for process-related counters. More on this later...
////

=== What device is slowing down my system?

A nice feature of Dstat is that it can show how many interrupts each of your devices is generating.
The 'cpu' stats already show this in percentage as 'hard interrupt' and 'soft interrupt', and the 'sys' stats show the total number of interrupts, but the 'int' stats go into detail. And you can specify exactly what IRQs you want to watch.

Many devices generate interrupts, especially when used at maximum capacity. Sometimes too many interrupts can slow down a system. If you want to correlate bad performance with hardware interrupts, you can run a command like:

----
dstat -tyif
dstat -tyi -I 12,58,iwlagn -f 5
----

Much like +watch -n1 -d cat /proc/interrupts+ on steroids.

----
dstat -t -y -i -f
----

which then results in:

.Sample output
----
[dag@rhun ~]$ dstat -t -y -i -f 5
-----time----- ---system-- -------------------interrupts------------------
  date/time   | int   csw |   1     9    12    14    15    58   177   185
13-08 21:52:53| 740   923 |   1     0    18     5     1    17     4   131
13-08 21:52:58|1491  2085 |   0     4   351     1     2    37     0    97
13-08 21:53:03|1464  1981 |   0     0   332     1     3    31     0    96
13-08 21:53:08|1343  1977 |   0     0   215     1     2    32     0    93
13-08 21:53:13|1145  1918 |   0     0    12     0     3    33     0    95
----

When having the following hardware:

----
[dag@rhun ~]$ cat /proc/interrupts
           CPU0
  0:  143766685    IO-APIC-edge  timer
  1:     374043    IO-APIC-edge  i8042
  9:     102564   IO-APIC-level  acpi
 12:    4481057    IO-APIC-edge  i8042
 14:    1192508    IO-APIC-edge  libata
 15:     358891    IO-APIC-edge  libata
 58:    4391819   IO-APIC-level  ipw2200
177:     993740   IO-APIC-level  Intel ICH6
185:   33542364   IO-APIC-level  yenta, uhci_hcd:usb1, eth0, i915@pci:0000:00:02.0
NMI:          0
LOC:  143766578
ERR:          0
MIS:          0
----

Or select specific interrupts:

----
dstat -t -y -i -I 12,58,185 -f 5
----

Another possibility is to use the +--top-int+ plugin, showing you the most frequent interrupt on your system:

----
[dag@rhun ~]# dstat -t --top-int
----system---- ---most-frequent----
     time     |      interrupt
11-06 08:34:53|ahci                5
11-06 08:34:54|i8042              69
11-06 08:34:55|i8042              45
11-06 08:34:56|ehci/usb2          12
11-06 08:34:57|
----

=== How does my WIFI signal evolve when I move my laptop or AP through the house?
Something I was looking into when trying to find the optimal location for the WIFI access point. However I must say that another tool I wrote, 'Dwscan', is currently more sophisticated.

----
dstat -t --wifi
----

=== Is my SWRAID performing as it claims?

You can monitor I/O throughput for any block device. By default dstat limits itself to real block devices to prevent the same I/O from being counted more than once, but if you want to monitor a SWRAID device, or a multipath device, you can simply do that by doing:

----
dstat -td -D md0,md1,sda,sdb,hda
----

== Writing your own Dstat plugin

Dstat is completely written in python and this makes it extremely convenient to write your own plugins. The many plugins that come with Dstat are an excellent source of information if you want to write your own.

=== Introducing the hello world plugin

The following plugin does nothing more than write "Hello world!" to its output.

.The dstat_helloworld plugin in its full glory.
----
class dstat_helloworld(dstat):
    """
    Example "Hello world!" output plugin for aspiring Dstat developers.
    """
    def __init__(self):
        self.name = 'plugin title'        <1>
        self.nick = ('counter',)          <2>
        self.vars = ('text',)             <3>
        self.type = 's'                   <4>
        self.width = 12                   <5>
        self.scale = 0                    <6>

    def extract(self):
        self.val['text'] = 'Hello world!' <7>
----

In this example, there are several components:

1. +self.name+ contains the plugin's visible title.
2. +self.nick+ is a list of the counter names.
3. +self.vars+ is a list of the variable names for each counter.
4. +self.type+ defines the counter type: string, percentage, integer, float.
5. +self.width+ defines the column width.
6. +self.scale+ influences the colouring and unit type.
7. +self.val+ contains the counter values that are being displayed.

=== Parsing counters

The following example shows how information is collected and counters are processed. It also includes a +check()+ method to properly bail out when the system fails to meet some plugin criteria.
.The dstat_postfix plugin
----
class dstat_postfix(dstat):
    def __init__(self):
        self.name = 'postfix'
        self.nick = ('inco', 'actv', 'dfrd', 'bnce', 'defr')
        self.vars = ('incoming', 'active', 'deferred', 'bounce', 'defer')
        self.type = 'd'    <1>
        self.width = 4
        self.scale = 100

    def check(self):       <2>
        if not os.access('/var/spool/postfix/active', os.R_OK):
            raise Exception, 'Cannot access postfix queues'

    def extract(self):
        for item in self.vars:  <3>
            self.val[item] = len(glob.glob('/var/spool/postfix/'+item+'/*/*'))
----

This example shows the following items:

1. type, width and scale specify a decimal counter, a column width of 4, and colouring based on multiples of 100.
2. The +check()+ method tests conditions and bails out if they are not met.
3. To make processing easier we have opted to use the names of the postfix queues as value names (+self.vars+) and store the counts in +self.val+.

=== Opening files

Dstat provides its own +dopen()+ function to plugins. Using +dopen()+ instead of +open()+, plugins do not need to reopen files to update their counters. But this is only useful when plugins open a few files. For eg. opening _/proc/pid_ files, the number of open files would only increase as the number of processes increases.

=== Piping to an application

Dstat provides its own +dpopen()+ function to plugins. This function allows the plugin to open stdin, stdout and stderr pipes for 2-way communication with processes. To see this in action, take a look at the +dstat_gpfs+ plugins or the +dstat_mysql+ plugins.

Piping to an application is more expensive than getting kernel counters from _/proc_, but it beats having to run a program and capture the output.

== Known issues

There are some known issues that are important to understand when using Dstat.

=== Writing Dstat and plugins in C

It makes sense to reimplement Dstat or some of its plugins in C and still allow the writing of Python (or even Perl) plugins.
Tests have shown that, for example, processing _/proc/pid_ in C makes the plugin 3 times faster. And this did not take into account the processing of the results and displaying the output. So rewriting in C makes a lot of sense, but it is also much more complicated.

=== Python 1.5

There used to be a Python 1.5 version of Dstat, but with RHEL2 going out of support in 2009 I decided to no longer spend the extra effort to sync and test the Dstat15 version. Leaving Python 1.5 behind means that plugins no longer have to be compatible with Python 1.5 either. It is no coincidence that after this event a major overhaul was made to the plugin interface.

=== Counter rollovers

Unfortunately Dstat is susceptible to counters that ``rollover''. This means that a counter gets bigger than the maximum value its data structure is capable of storing. As a result the counter is reset.

For some architectures and some counters, Linux implements 32bit values, which means that such a counter can go up to 2^32 (= 4294967296B = 4G) values. For example the network counters are calculated in absolute bytes. Every 4GB that is transferred over the network will cause a counter reset. On bonded 2x10Gbps interfaces running at their theoretical transfer limit, this would happen every 1.6 seconds. Since _/proc_ is updated every second, this would be impossible for Dstat to catch.

Currently if Dstat encounters a negative difference for an interval it assumes a single rollover has happened and compensates for it. If that assumption is wrong, the user is working with wrong counters nonetheless.

If you suspect that the behaviour of your system is susceptible to counter rollovers, make sure you take this into account when using Dstat (or any other tool that uses these counters, for that matter).

TIP: Shipped with the Dstat documentation there is a document (_counter-rollovers.txt_) that goes deeper into counter rollovers.
If this affects you, read that document and contact me for possible implementation changes to improve handling them.

== Dstat performance

As mentioned several times now, Dstat is written in python. There are various reasons why Python was chosen, and the most important one is that we target system engineers and users, so we need to simplify writing plugins and processing counters, and lower the bar for people to contribute changes.

The downside of choosing a scripting language is that it is slower than if it were written in C, obviously. *Dstat is not optimised for performance.*

NOTE: This may seem ironic: a performance monitoring tool that is not optimised for performance, but rather for flexibility. However the ease of writing plugins and prototyping takes precedence over performance at this time. On the other hand we have pretty good tools to measure the overhead of a single plugin, and profiling infrastructure to counter any excuses for sloppy plugin development.

=== Plugin performance

If we look at the basic plugins, there are no real performance issues with Dstat. Dstat takes longer to start than eg. vmstat, but once running, Dstat's performance for the same functionality is on par with vmstat, ifstat and other similar tools. However there are *some plugins that are much more resource intensive than others* and the selection of plugins determines Dstat's performance in a major way.

=== Performance monitoring Dstat

Dstat comes with some plugins (starting with +dstat_+) to check the overhead of Dstat itself; this, together with the selection of plugins, makes it very convenient to measure the overhead of individual plugins. The following options exist (as plugins):

--dstat:: Provides cputime and latency information for Dstat. This plugin can help you determine how accurate and how much overhead Dstat has with its current plugins enabled.

--dstat-cpu:: Provides cpu utilization (user-space and kernel-space) statistics for Dstat.
This plugin can help determine where there is room for improvement for individual plugins (or Dstat itself).

--dstat-ctxt:: Provides context switch information for Dstat. Both voluntary as well as involuntary context switches are shown, providing you with some idea of how the system is providing timeslices and how Dstat is returning the cpu to the system.

--dstat-mem:: Provides memory information about the Dstat process. This plugin enables plugin developers to determine whether Dstat is increasing its memory usage and is therefore 'leaking' memory over time. This plugin proved very useful in optimizing memory usage of the top-plugins, which typically scan all processes.

--snooze:: This plugin shows in milliseconds how much time is deviating from the previous run, which is influenced by the time it takes for earlier stats to be calculated. So the output of this plugin is very dependent on its location on the command line.

--debug:: This option is not a plugin, but internal to Dstat. It will cause Dstat to show the actual time in milliseconds from start to end at the end of each line. This should be more or less close to the output of the +dstat_dstat+ and +dstat_dstat_cpu+ plugins.
+
It also influences the internal +dstat_time+ plugin to show milliseconds instead of seconds, which may help show the accuracy of Dstat itself.

--profile:: This option is also not a plugin, but internal to Dstat. It provides you with detailed profiling information at the end of each run. The default settings can be changed inside Dstat (or a copy) to tweak the output you are looking for. It creates a temporary profiling file in the current directory when running, but will clean it up after exit.

// FIXME: Please improve the examples by using the --dstat plugins

=== Measuring plugins

Here is a small example of how one can measure the impact of a plugin.
.The cost of running the timer plugin
----
[dag@rhun dag]$ dstat -t --debug
Module dstat_time
-----time-----
  date/time
19-08 20:34:21  5.90ms
19-08 20:34:22  0.17ms
19-08 20:34:23  0.18ms
19-08 20:34:24  0.18ms
----

Compare this with other plugins to see what the cost of an individual plugin is.

.The cost of running the +dstat_cpu+ plugin
----
[dag@rhun dstat]$ dstat -c --debug
Module dstat_cpu requires ['/proc/stat']
----total-cpu-usage----
usr sys idl wai hiq siq
 15   3  77   4   0   1 11.07ms
  5   3  92   0   0   0  0.66ms
  5   4  91   0   0   0  0.65ms
  5   3  92   0   0   0  0.66ms
----

As you can see, getting the CPU counters and calculating the CPU usage takes up about 0.7 milliseconds on this particular system. But if we look at the usage of the +dstat_top_cpu+ plugin:

.The cost of running the +dstat_top_cpu+ plugin
----
[dag@rhun dstat]$ dstat --top-cpu --debug
Module dstat_top_cpu
-most-expensive-
  cpu process
Xorg           2 43.82ms
Xorg           1 33.23ms
firefox-bin    2 33.54ms
Xorg           1 33.24ms
----

we see that processing the _/proc/pid_ files causes the top-cpu plugin to use an additional 33ms.

WARNING: These values show the time it takes to process the plugins and do not indicate the amount of CPU usage Dstat consumes. This obviously means that the processing time of plugins depends on how much the system is being stressed as well as on what the plugin exactly is doing. Plugins that communicate with other processes, or that process lots of information (eg. communicating with the mysql client, or processing the mail queue), may not actually use many local resources, but the latency causes Dstat to slow down processing other counters.

// FIXME: Write about profiling infrastructure

== Future development

The Dstat release contains a _TODO_ file highlighting all the items and ideas that have been played with.
Here is a list of the most important ones:

- Output
* Changes in how Dstat colours digits within a value (the 6 in 6134B)
- Exporting information
* Connecting Dstat with rrdtool
* Exporting to syslog or remote syslog (a way to transport counters?)
- Plugins
* Be smart when plugins are loaded more than once (some plugins could benefit)
* Add more plugins
- Redesign Dstat
* Create an object model and namespace for plugins and counters so that other tools can be based on Dstat

== Links

- http://dag.wieers.com/home-made/dstat/[Dstat homepage]
- http://svn.rpmforge.net/svn/trunk/tools/dstat/[Dstat subversion]
- http://lists.rpmforge.net/mailman/listinfo/tools[Dstat mailinglist]

// vim: set syntax=asciidoc:

This Dstat paper was originally written for LinuxConf Europe that was held together with the Linux Kernel summit at the University in Cambridge, UK in August 2007.

Introduction

Many tools exist to monitor hardware resources and software behaviour, but few tools exist that allow you to easily monitor any conceivable counter.

Dstat was designed with the idea that it should be simple to plug in a piece of code that extracts one or more counters, and make it visible in a way that visually pleases the eye and helps you extract information in real-time.

By being able to select those counters that you want (and likely those counters that matter to you in the job you’re doing) you make it easier to correlate raw numbers and see a pattern that may otherwise not be visible.

A case for Dstat

A few years ago I was involved in a project that was testing a storage cluster with a SAN back-end using GPFS and Samba for a broadcasting company. The performance tests that were scheduled together with the customer took a few weeks to measure the different behaviour under different stresses.

During these tests there was a need to see how each of the components behaved and to find problematic behaviour during testing. Also, because it involved 5 GPFS nodes, we needed to make sure that the load was spread evenly during the test. If everything went well repeatedly, the results were validated and the next batch of tests could be prepared and run.

We started off using different tools at first, but the more counters we were trying to capture the harder it was to post-process the information we had collected. What’s more, we often saw only after performing the tests that the data was not representative because the numbers didn’t add up. Sometimes it was caused by the massive setup of clients that were autonomously stressing the cluster. On other occasions we noticed that the network was the culprit. All in all, we lost time because we could only validate the results by relating numbers after the tests were complete and not during the tests.

Complicating the matter was the fact that 5 different nodes were involved and using the normal command line tools like vmstat, iostat or ifstat (which only showed us a small part of what was happening) was problematic as each needed a different terminal. Besides, not all information was interesting.

Eventually Dstat was born, to make a dull task more enjoyable.

After the project was finished I was able to correlate system resources with network throughput, TCP information, Samba sessions, GPFS throughput, accumulated block device throughput, HBA throughput, all within a single interval on one screen for the complete cluster.

Dstat characteristics

There are many ideas incorporated into Dstat by design, and this section serves to list all of them. Not all of them may appeal to the task you’re doing, but the combination may make it an appealing proposition nevertheless.

History of counters

An important characteristic in line-based tools like vmstat, iostat or ifstat is the fact that you can compare historically collected data with new data. This gives you a good feel for how something is evolving.

Compare this to tools like top or nmon, where data is often being refreshed and you lose historical information (but which in return can provide you with a lot more information at the same time).

Adding unit indication

It was very important that when numbers were compared, they were in the same unit, and not eg. a different power exponent. The human mind sometimes works in mysterious ways and more so when working with numbers for hours and hours. Adding the unit is something very convenient and may reduce the human error factor.

Additionally, indicating the unit also makes sure that the columns have a fixed width. Often when using vmstat or other tools, the columns tend to shift depending on the width of the counter. This makes it very inconvenient to find counters in the shifted output.
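The fixed-width idea can be sketched in a few lines of Python. This is a minimal illustration of the principle only, not Dstat's actual formatting code: a raw byte count is scaled down until it fits, a unit suffix is appended, and the result is padded to a constant column width so columns never shift.

```python
def fmt_counter(value, width=6):
    """Render a raw counter at a fixed column width with a unit suffix.
    A sketch of the idea only -- not Dstat's real implementation."""
    for unit in ('B', 'k', 'M', 'G', 'T'):
        if value < 1024:
            # Pad on the left so every value occupies the same width.
            return ('%d%s' % (value, unit)).rjust(width)
        value //= 1024
    return ('%d%s' % (value, 'P')).rjust(width)

print(fmt_counter(512))   # '  512B'
print(fmt_counter(2952))  # '    2k'
```

Because every rendered value has the same width, a column of such values stays aligned regardless of whether the counter is in bytes, kilobytes or gigabytes.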

Colour highlighting units

After I added colours to help improve indicating units, I noticed that the colours also helped to show patterns. This of course is very limited, nevertheless it instantly shows when numbers are flat or changes are taking place.

Important
The colours are arbitrarily chosen. Do not make the mistake of assuming that green means good and red means bad. There is no real meaning to the colour itself; however, a change of colour does mean that a value has gone over some pre-defined limit.

Intermediate updates

During tests, when you choose to see average values over a given time, it can be useful to see how the averages evolve. Dstat, by default, displays intermediate updates. This means that if you choose to see 10-second averages, after each second you see the accumulated average over the elapsed timespan; after 4 seconds with intermediate updates, you see an average taken over that 4-second timeframe.

Note
This means that the closer you get to the given timeframe (eg. 10 seconds) the more likely that it nears its final average over that period.
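The accumulation behind intermediate updates can be sketched as follows (an illustration of the principle, not Dstat's code): each second the new sample is added to a running total, and the displayed value is the total divided by the number of elapsed seconds, so the final update equals the full-interval average.

```python
def intermediate_updates(samples):
    """Return the value displayed after each 1-second sample when
    averaging over the elapsed part of the interval (a sketch only)."""
    total = 0.0
    shown = []
    for count, value in enumerate(samples, start=1):
        total += value
        shown.append(total / count)  # average over elapsed seconds
    return shown

# four 1-second samples within a 4-second interval
print(intermediate_updates([4, 2, 6, 8]))  # [4.0, 3.0, 4.0, 5.0]
```

Note how each intermediate value refines the previous one, and the last value (5.0) is exactly the average over the whole interval.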

Adding custom counters

Dstat was specifically designed to enable anyone to add their own counters in a matter of minutes. The plugin-based system takes care of displaying, colouring and adding units to the counters. As a plugin-writer, you only have to focus on extracting the counters from the kernel (procfs or sysfs), logfiles or daemons.

Selecting plugins and counters

Being able to add custom counters is important, but selecting those counters that you really need is even more important if you want to correlate counters and see patterns. Less is more.

Note
In fact, Dstat currently does not allow you to select just counters, it only allows you to select plugins. However, since you can modify or fork a plugin, you still have the ability to select just those counters you prefer.

Exporting to CSV

Having information on screen is one thing, you most likely need some hard evidence later to make your case. (Why else do all the work?)

Dstat allows to write out all counters in the greatest detail possible to CSV. By default it also adds the command-line used for generating the output, as well as a date and time stamp. Since Dstat in the first place is meant for human-readable real-time statistics, it will by default also display the counters to screen (unless you /dev/null it).

Tip
Dstat appends to the output file so that you can add tests-results of different tests to a single file. However, make sure that you tag each test properly (eg. by using distinct filenames for each different test).

Time-plugin included

It may seem a small thing, but having exact time (and date) information for your counters allows for a completely different usage as well. By adding simple date and time information, Dstat can be used as a background process in a screen to monitor the behaviour of your system during the night.

This proves to be very valuable for example, to find offending processes during nightly tasks or to pinpoint their behaviour to certain events that you cannot monitor during working hours.

It is also important when you have multiple Dstats running (eg. for nodes in a cluster) to correlate counters between the outputs.

Terminal capabilities

Dstat also takes into account the width and height of your terminal window and modifies output to fit into your terminal. This, of course, has no effect on what ends up in the CSV output.

Another (debatable) useful feature is that Dstat will modify the terminal title to indicate on what system it was run and what options were used. Especially when monitoring nodes in a cluster, this can be useful, but even in Gnome finding your Dstat window is handy.

Warning
Some people however are annoyed by the fact that their distribution does not reset the terminal title and Dstat therefor messes it up. There is no way for Dstat to fix this.

Plugins and counters

When we talk about plugins, we make a distinction between those plugins that are included within the Dstat tool itself, and those that ship with it externally. In essence there is no real difference, as the internal plugins could easily have been created as an external plugin. The basic difference is that the internal plugins have no dependencies except on procfs.

Having the basic plugins as part of Dstat makes sure that Dstat can be moved as a self-contained file to other systems.

Internal plugins

The plugins that have been selected to be part of the Dstat tool itself, and that therefore have no dependencies other than procfs, are:

  • aio: asynchronous I/O counters

  • cpu, cpu24: CPU counters (-c and -C)

  • disk, disk24, disk24old: disk counters (-d and -D)

  • epoch: seconds since Epoch (-T)

  • fs: file system counters

  • int, int24: interrupts per IRQ (-i and -I)

  • io: I/O requests completed (-r)

  • ipc: IPC counters

  • load: load counters (-l)

  • lock: locking counters

  • mem: memory usage (-m)

  • net: network usage (-n and -N)

  • page, page24: paging counters (-g)

  • proc: process counters (-p)

  • raw: raw socket counters

  • swap, swapold: swap usage (-s and -S)

  • socket: socket counters

  • sys: system (kernel) counters (-y)

  • tcp: TCP socket counters

  • time: date and time (-t)

  • udp: UDP socket counters

  • unix: unix socket counters

  • vm: virtual memory counters

For backward compatibility with older kernels there is a cascading system that selects the most appropriate internal plugin for your kernel (eg. the dstat_disk plugin falls back to dstat_disk24 and then to dstat_disk24old). At this moment there is no such system for external plugins.

External plugins

This basic functionality is easily extended by writing your own plugins (subclasses of the python Dstat class) which are then inserted at runtime into Dstat. A set of external modules exist for:

  • battery: battery usage

  • battery-remain: remaining battery time

  • cpufreq: CPU frequency

  • dbus: DBUS connections

  • disk-tps: disk transactions counters

  • disk-util: disk utilization percentage

  • dstat: dstat cputime consumption and latency

  • dstat-cpu: dstat advanced cpu usage

  • dstat-ctxt: dstat context switches

  • dstat-mem: dstat advanced memory usage

  • fan: Fan speed

  • freespace: free space on filesystems

  • gpfs: GPFS IO counters

  • gpfs-ops: GPFS operations counters

  • helloworld: Hello world dispenser

  • innodb-buffer: innodb buffer counters

  • innodb-io: innodb I/O counters

  • innodb-ops: innodb operations counters

  • lustre: lustre throughput counters

  • memcache-hits: Memcache hit counters

  • mysql5-cmds: MySQL communication counters

  • mysql5-conn: MySQL connection counters

  • mysql5-io: MySQL I/O counters

  • mysql5-keys: MySQL keys counters

  • mysql-io: MySQL I/O counters

  • mysql-ops: MySQL operations counters

  • net-packets: number of packets received and transmitted

  • nfs3: NFS3 client counters

  • nfs3-ops: NFS3 client operations counters

  • nfsd3: NFS3 server counters

  • nfsd3-ops: NFS3 server operations counters

  • ntp: NTP time counters

  • postfix: postfix queue counters

  • power: Power usage counters

  • proc-count: total number of processes

  • qmail: qmail queue sizes

  • rpc: RPC client counters

  • rpcd: RPC server counters

  • sendmail: sendmail queue counters

  • snooze: Dstat time delay counters

  • squid: squid usage statistics

  • thermal: Thermal counters

  • top-bio: most expensive block I/O process

  • top-bio-adv: most expensive block I/O process (advanced)

  • top-cpu: most expensive cpu process

  • top-cpu-adv: most expensive CPU process (advanced)

  • top-cputime: process using the most CPU time

  • top-cputime-avg: process having the highest average CPU time

  • top-int: most frequent interrupt

  • top-io: most expensive I/O process

  • top-io-adv: most expensive I/O process (advanced)

  • top-latency: process with the highest total latency

  • top-latency-avg: process with the highest average latency

  • top-mem: most expensive memory process

  • top-oom: process first shot by OOM killer

  • utmp: utmp counters

  • vm-memctl: VMware guest memory counters

  • vmk-hba: VMware kernel HBA counters

  • vmk-int: VMware kernel interrupt counters

  • vmk-nic: VMware kernel NIC counters

  • vz-cpu: OpenVZ CPU counters

  • vz-io: I/O usage per OpenVZ guest

  • vz-ubc: OpenVZ user beancounters

  • wifi: WIFI quality information

Most-wanted plugins

Hoping that someone interested reads this document, here are a few plugins that would be “very nice” to have but are currently lacking:

  • slab: needs a VM expert to make sense out of the vast amount of data

  • xorg: need information on how to get X resources, would be nice to see evolution of X resources over time

  • samba: lacking information to get counters from Samba without forking smbstatus every second

  • snmp: could be useful to relate counters from different systems in a single Dstat

  • topx: display the most expensive X application(s)

  • systemtap: connecting Dstat to systemtap counters

Creative souls with other ideas are welcome as well!

Using Dstat

Central to the Dstat command line interface is the selection of plugins. The selection and order of options influence the Dstat output directly.

Enabling plugins

The internal plugins have short and/or long options within Dstat, eg. -c or --cpu will enable the cpu counters.

The external plugins are enabled by a long option including their name, eg. --top-cpu.

The following examples enable the time, cpu and disk plugins, and are equivalent:

dstat -tcd
dstat --time --cpu --disk

Total or individual counters

Some plugins can show either total values or individual values, and therefore have an extra option to influence this decision.

dstat -d -D sda,sdb
dstat -n -N eth0,eth1
dstat -c -C total,0,1

You can show both the individual values and total values as follows:

[dag@horsea ~]$ dstat -d -D total,hda,hdc
-dsk/total----dsk/hda-----dsk/hdc--
 read  writ: read  writ: read  writ
1384k 1502k: 114k 1332k:  81k  359B
   0    44k:   0    44k:   0     0
   0     0 :   0     0 :   0     0

The special -f or --full option selects individual counters by default, and can be overruled by -C, -D, -I, -N or -S.

Influencing output

Dstat has a few more options to influence its output. With --nocolor one can disable colours. The --noheaders option disables repeating headers. The --noupdate option disables intermediate updates. The --output option writes the output to a CSV file.

Plugin search path

Dstat looks in the following places for plugins. This allows a user without root privileges to use some extra plugins.

  • ~/.dstat/

  • <binarypath>/plugins/

  • /usr/share/dstat/

  • /usr/local/share/dstat/

The option --list shows the available plugins and their location in the order that the plugin search path is used.

Note
Plugins are named dstat_<name>.py.
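As a simplified sketch of how such a search path lookup can work (illustration only; the real lookup lives inside the dstat script, and the directory list below mirrors the one above, with `<binarypath>/plugins/` omitted since it depends on the install location):

```python
import os

# Hypothetical reconstruction of dstat's plugin search path.
SEARCHPATH = [
    os.path.expanduser('~/.dstat'),
    '/usr/share/dstat',
    '/usr/local/share/dstat',
]

def find_plugin(name, searchpath=SEARCHPATH):
    # Plugin files follow the dstat_<name>.py convention, with dashes
    # in the option name mapped to underscores (eg. --top-cpu ->
    # dstat_top_cpu.py). The first directory that has the file wins.
    filename = 'dstat_%s.py' % name.replace('-', '_')
    for directory in searchpath:
        candidate = os.path.join(directory, filename)
        if os.path.exists(candidate):
            return candidate
    return None
```

Calling `find_plugin('top-cpu')` would return the path of the first `dstat_top_cpu.py` found, or `None` when the plugin is not installed.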

Use-cases

Below are some use-cases to demonstrate the usage of Dstat.

Warning
The following examples do not look as nice as they do on screen because this document is not printed in colour (and I did not prepare it in colour :-)).

Simple system check

Let’s say you quickly want to see if the system is doing alright. In the past this probably was a vmstat 1; as of now you would do:

dstat -taf
Sample output
[dag@rhun dag]$ dstat -taf
-----time----- -------cpu0-usage------ --dsk/sda-----dsk/sr0-- --net/eth1- ---paging-- ---system--
  date/time   |usr sys idl wai hiq siq| read  writ: read  writ| recv  send|  in   out | int   csw
02-08 02:42:48| 10   2  85   2   0   0|  22k   23k: 1.8B    0 |   0     0 |2588B 2952B| 558   580
02-08 02:42:49|  4   3  93   0   0   0|   0     0 :   0     0 |   0     0 |   0     0 |1116   962
02-08 02:42:50|  5   2  90   0   2   1|   0    28k:   0     0 |   0     0 |   0     0 |1380  1136
02-08 02:42:51| 11   6  82   0   1   0|   0     0 :   0     0 |   0     0 |   0     0 |1277  1340
02-08 02:42:52|  3   3  93   0   1   0|   0    84k:   0     0 |   0     0 |   0     0 |1311  1034
Note
The -t here is completely optional and generally wastes space. But often you are not monitoring for 10 seconds, but rather measuring in minutes or hours. Having a general idea of the timescale over which counters have been averaged is nevertheless interesting.

What is this system doing now ?

I often run the dstat_top_cpu and dstat_top_mem plugins together on a system, just to see what that system is doing. Having a quick look at which application is using the most CPU over a few minutes, and seeing the general memory usage of the top application, gives away a lot about a system.

Sample output
[dag@horsea dag]$ dstat -c --top-cpu -dng --top-mem
----total-cpu-usage---- -most-expensive- -dsk/total- -net/total- ---paging-- -most-expensive-
usr sys idl wai hiq siq|  cpu process   | read  writ| recv  send|  in   out | memory process
  9   2  80   9   0   0|kswapd         0| 123k  164k|   0     0 |9196B   18k|rsync        74M
  2   3  95   0   0   0|sendmail       1|   0   168k|2584B   39k|   0     0 |rsync        74M
 18   3  79   0   0   0|httpd         17|   0    88k|5759B  118k|   0     0 |rsync        74M
  3   2  94   1   0   0|sendmail       1|4096B    0 |2291B 4190B|   0     0 |rsync        74M
  2   3  95   0   0   0|httpd          1|   0     0 |2871B 3201B|   0     0 |rsync        74M
 10   7  83   0   0   0|httpd         13|   0     0 |2216B   10k|   0     0 |rsync        74M
  2   2  96   0   0   0|                |   0    52k| 724B 2674B|   0     0 |rsync        74M

What process is using all my CPU, memory or I/O at 4:20 AM ?

Imagine the monitoring team notices strange peaks, a system engineer got a worthless message, the system was swapping extensively, a process got killed.

Something indicates the system is doing something unexpected but what is causing it and why ? As of now you can do:

screen dstat -tcy --top-cpu 120
screen dstat -tmgs --top-mem 120
screen dstat -tdi --top-io 120

to see what process is using the most CPU, the most memory and the most I/O resources.

And hopefully one day we can do:

dstat -tn --top-net 120
dstat -tn --top-x 120

Leave it running during the night and in the morning you can see the light.

How much ticks per second on my kernel ?

In some cases it can be useful to see how many ticks (timer interrupts) your kernel is producing. With older kernels this is a fixed number (usually 100, 250 or 1000) but on newer kernels the number can be dynamic.

Also, on VMware virtual machines the number of ticks can cause clock issues, so if you want to see what is happening in that case, you can simply do:

dstat -ti -I0 --snooze --debug

Dstat nowadays can also detect lost ticks (when the number of ticks does not match the time progress). This is useful to correlate VM issues with other problems.
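The idea behind lost-tick detection can be sketched as follows (illustrative only; Dstat's own check differs in detail, and the `hz` tick rate is an assumption you must supply):

```python
def lost_ticks(observed_ticks, elapsed_seconds, hz):
    # Compare the observed tick count against the count expected from
    # wall-clock progress; any shortfall suggests the kernel lost ticks
    # (a common symptom on overcommitted virtual machines).
    expected = round(elapsed_seconds * hz)
    return max(0, expected - observed_ticks)
```

For example, seeing only 900 timer interrupts during one wall-clock second on a 1000 Hz kernel suggests roughly 100 lost ticks.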

What device is slowing down my system ?

A nice feature of Dstat is that it can show how many interrupts each of your devices is generating. The cpu stats already show this in percentages as hard and soft interrupts, and the sys stats show the total number of interrupts, but the int stats go into detail. And you can specify exactly which IRQs you want to watch.

Many devices generate interrupts, especially when used at maximum capacity. Sometimes too many interrupts can slow down a system. If you want to correlate bad performance with hardware interrupts, you can run a command like:

dstat -tyif
dstat -tyi -I 12,58,iwlagn -f 5

Much like watch -n1 -d cat /proc/interrupts on steroids.

dstat -t -y -i -f

which then results in:

Sample output
[dag@rhun ~]$ dstat -t -y -i -f 5
-----time----- ---system-- -------------------interrupts------------------
  date/time   | int   csw |  1     9     12    14    15    58   177   185
13-08 21:52:53| 740   923 |   1     0    18     5     1    17     4   131
13-08 21:52:58|1491  2085 |   0     4   351     1     2    37     0    97
13-08 21:53:03|1464  1981 |   0     0   332     1     3    31     0    96
13-08 21:53:08|1343  1977 |   0     0   215     1     2    32     0    93
13-08 21:53:13|1145  1918 |   0     0    12     0     3    33     0    95

Given the following hardware:

[dag@rhun ~]$ cat /proc/interrupts
           CPU0
  0:  143766685    IO-APIC-edge  timer
  1:     374043    IO-APIC-edge  i8042
  9:     102564   IO-APIC-level  acpi
 12:    4481057    IO-APIC-edge  i8042
 14:    1192508    IO-APIC-edge  libata
 15:     358891    IO-APIC-edge  libata
 58:    4391819   IO-APIC-level  ipw2200
177:     993740   IO-APIC-level  Intel ICH6
185:   33542364   IO-APIC-level  yenta, uhci_hcd:usb1, eth0, i915@pci:0000:00:02.0
NMI:          0
LOC:  143766578
ERR:          0
MIS:          0

Or select specific interrupts:

dstat -t -y -i -I 12,58,185 -f 5

Another possibility is to use the --top-int plugin, which shows you the most frequent interrupt on your system:

[dag@rhun ~]# dstat -t --top-int
----system---- ---most-frequent----
     time     |     interrupt
11-06 08:34:53|ahci              5
11-06 08:34:54|i8042            69
11-06 08:34:55|i8042            45
11-06 08:34:56|ehci/usb2        12
11-06 08:34:57|

How does my WIFI signal evolve when I move my laptop or AP through the house ?

Something I was looking into when trying to find the optimal location for the WIFI access point. However, I must say that another tool I wrote, Dwscan, is currently more sophisticated.

dstat -t --wifi

Is my SWRAID performing as it claims ?

You can monitor I/O throughput for any block device. By default Dstat limits itself to real block devices to prevent the same I/O from being counted more than once, but if you want to monitor a SWRAID device or a multipath device, you can simply do:

dstat -td -D md0,md1,sda,sdb,hda

Writing your own Dstat plugin

Dstat is completely written in python and this makes it extremely convenient to write your own plugins. The many plugins that come with Dstat are an excellent source of information if you want to write your own.

Introducing the hello world plugin

The following plugin does nothing more than write "Hello world!" to its output.

The dstat_helloworld plugin in its full glory.
class dstat_helloworld(dstat):
    """
    Example "Hello world!" output plugin for aspiring Dstat developers.
    """
    def __init__(self):
        self.name = 'plugin title'          <1>
        self.nick = ('counter',)            <2>
        self.vars = ('text',)               <3>
        self.type = 's'                     <4>
        self.width = 12                     <5>
        self.scale = 0                      <6>

    def extract(self):
        self.val['text'] = 'Hello world!'   <7>

In this example, there are several components:

  1. self.name contains the plugin’s visible title.

  2. self.nick is a list of the counter names

  3. self.vars is a list of the variable names for each counter

  4. self.type defines the counter type: string, percentage, integer, float

  5. self.width defines the column width

  6. self.scale influences the coloring and unit type

  7. self.val contains the counter values that are being displayed
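To experiment with these plugin mechanics without installing anything, the snippet below pairs the plugin with a toy stand-in for the real dstat base class (an assumption for illustration: the real class lives inside the dstat script and additionally handles formatting, colours and CSV output):

```python
class dstat:
    """Toy stand-in for dstat's plugin base class (illustration only)."""
    def __init__(self):
        self.val = {}

class dstat_helloworld(dstat):
    """The hello world plugin from above, minus the callout markers."""
    def __init__(self):
        dstat.__init__(self)
        self.name = 'plugin title'
        self.nick = ('counter',)
        self.vars = ('text',)
        self.type = 's'
        self.width = 12
        self.scale = 0

    def extract(self):
        self.val['text'] = 'Hello world!'

# Minimal harness: instantiate, extract, and render one output row.
plugin = dstat_helloworld()
plugin.extract()
row = ' '.join(str(plugin.val[var]).ljust(plugin.width) for var in plugin.vars)
```

After running this, `row` holds the plugin's single counter value padded to its column width, which is roughly what Dstat prints each interval.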

Parsing counters

The following example shows how information is collected and counters are processed. It also includes a check() method to properly bail out when the system fails to meet some plugin criteria.

The dstat_postfix plugin
class dstat_postfix(dstat):
    def __init__(self):
        self.name = 'postfix'
        self.nick = ('inco', 'actv', 'dfrd', 'bnce', 'defr')
        self.vars = ('incoming', 'active', 'deferred', 'bounce', 'defer')
        self.type = 'd'                                                    <1>
        self.width = 4
        self.scale = 100

    def check(self):                                                       <2>
        if not os.access('/var/spool/postfix/active', os.R_OK):
            raise Exception('Cannot access postfix queues')

    def extract(self):
        for item in self.vars:                                             <3>
            self.val[item] = len(glob.glob('/var/spool/postfix/'+item+'/*/*'))

This example shows the following items:

  1. type, width and scale specify a decimal counter type, the column width, and coloring based on multiples of 100

  2. The check() method tests conditions and bails out if they are not met

  3. To make processing easier we have opted to use the names of the postfix queues as value names (self.vars) and to store the counts in self.val

Opening files

Dstat provides its own dopen() function to plugins. By using dopen() instead of open(), plugins do not need to reopen files to update their counters. This is only useful when a plugin opens a small number of files, though: when opening eg. /proc/pid files, the number of open files would keep growing as the number of processes grows.
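The idea behind dopen() can be sketched as a small cache of file objects that are rewound instead of reopened (a simplified illustration, not Dstat's actual implementation):

```python
# Cache of already-opened files, keyed by path.
_filecache = {}

def dopen(path):
    # On the first call the file is opened; on every later call the
    # cached file object is merely rewound, so /proc files can be
    # re-read each interval without a fresh open() syscall.
    if path in _filecache:
        _filecache[path].seek(0)
    else:
        _filecache[path] = open(path)
    return _filecache[path]
```

A plugin's extract() method can then simply call `dopen('/proc/stat').readlines()` every interval.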

Piping to an application

Dstat provides its own dpopen() function to plugins. This function allows the plugin to open stdin, stdout and stderr pipes for 2-way communication with processes. To see this in action, take a look at the dstat_gpfs plugins or the dstat_mysql plugins.

Piping to an application is more expensive than getting kernel counters from /proc, but it beats having to run a program and capture its output each time.
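A minimal sketch of what a dpopen()-style helper does, using the standard subprocess module (Dstat's real helper differs in detail):

```python
import subprocess

def dpopen(cmd):
    # Start a long-running child once and return its stdin/stdout/stderr
    # pipes; the plugin then writes a query and reads the answer every
    # interval instead of forking a new process each time.
    proc = subprocess.Popen(cmd, shell=True, bufsize=1,
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE,
                            universal_newlines=True)
    return proc.stdin, proc.stdout, proc.stderr
```

For example, a plugin could keep a pipe to `mmpmon` or the mysql client open, send one command per interval, and read the reply line by line.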

Known issues

There are some known issues that are important to understand when using Dstat.

Writing Dstat and plugins in C

It makes sense to reimplement Dstat or some of its plugins in C and still allow the writing of Python (or even Perl) plugins. Tests have shown that for example processing /proc/pid in C makes the plugin 3 times faster. And this did not take into account the processing of the results and displaying the output.

So rewriting in C makes a lot of sense, but it is also much more complicated.

Python 1.5

There used to be a Python 1.5 version of Dstat, but with RHEL2 going out of support in 2009 I decided to no longer spend the extra effort to sync and test the Dstat15 version.

Leaving Python 1.5 behind means that plugins no longer have to be compatible with Python 1.5 either. It is no coincidence that after this event a major overhaul was made to the plugin interface.

Counter rollovers

Unfortunately Dstat is susceptible to counters that “roll over”. This means that a counter grows beyond the maximum value its data structure is capable of storing, and as a result the counter is reset.

For some architectures and some counters, Linux implements 32-bit values; this means that such a counter can only count up to 2^32 (= 4294967296 = 4G).

For example, the network counters are kept in absolute bytes. Every 4GB transferred over the network will cause a counter reset. On bonded 2x10Gbps interfaces running at their theoretical transfer limit, this would happen every 1.6 seconds.

Since /proc is updated only every second, this would be impossible for Dstat to catch. Currently, when Dstat encounters a negative difference for an interval, it assumes a single rollover has happened and compensates for it. If that assumption is wrong, the user ends up with wrong counters nonetheless.
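The compensation described above can be sketched in a few lines (assuming a 32-bit counter; illustration only):

```python
MAX32 = 2 ** 32  # capacity of a 32-bit kernel counter

def counter_delta(new, old, maxval=MAX32):
    # Difference between two successive samples of a monotonically
    # increasing counter. A negative difference is taken to mean the
    # counter rolled over exactly once, and is corrected accordingly;
    # more than one rollover per interval cannot be detected.
    delta = new - old
    if delta < 0:
        delta += maxval
    return delta
```

So a sample pair of `old = 2**32 - 50` and `new = 100` yields a corrected delta of 150 bytes rather than a huge negative number.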

If you suspect that the behaviour of your system is susceptible of counter rollovers, make sure you take this into account when using Dstat (or any other tool that uses these counters for that matter).

Tip
Shipped with the Dstat documentation there is a document (counter-rollovers.txt) that goes deeper into counter rollovers. If this affects you, read that document and contact me for possible implementation changes to improve handling them.

Dstat performance

As mentioned several times now, Dstat is written in Python. There are various reasons why Python was chosen, the most important one being that we target system engineers and users, so we need to simplify writing plugins and processing counters, and lower the bar for people to contribute changes.

The downside of choosing a scripting language is that it is obviously slower than if it were written in C. Dstat is not optimised for performance.

Note
This may seem ironic: a performance monitoring tool that is not optimised for performance, but rather for flexibility. However the ease of writing plugins and prototyping gets precedence over performance at this time. On the other hand we have pretty good tools to measure the overhead of a single plugin and profiling infrastructure to counter any excuses for sloppy plugin development.

Plugin performance

If we look at the basic plugins, there are no real performance issues with Dstat. Loading Dstat takes longer to start than eg. vmstat, but once running, Dstat’s performance for the same functionality is up to par with vmstat, ifstat and other similar tools.

However there are some plugins that are much more resource intensive than others and the selection of plugins determines Dstat’s performance in a major way.

Performance monitoring Dstat

Dstat comes with some plugins (starting with dstat_) to check the overhead of Dstat itself. This, together with the selection of plugins, makes it very convenient to measure the overhead of individual plugins. The following options exist (as plugins):

--dstat

Provides cputime and latency information for Dstat. This plugin can help you determine how accurate and how much overhead Dstat has with its current plugins enabled.

--dstat-cpu

Provides cpu utilization (user-space and kernel-space) statistics for Dstat. This plugin can help determine where there is some room for improvement for individual plugins (or Dstat itself).

--dstat-ctxt

Provides context switch information for Dstat. Both voluntary as well as involuntary context switches are shown, providing you with some idea of how the system is providing timeslices and how Dstat is returning the CPU to the system.

--dstat-mem

Provides memory information about the Dstat process. This plugin enables plugin developers to determine whether Dstat is increasing its memory usage and is therefore leaking memory over time. This plugin proved very useful in optimizing the memory usage of the top-plugins, which typically scan all processes.

--snooze

This plugin shows, in milliseconds, how much the time deviates from the previous run, which is influenced by the time it takes for earlier stats to be calculated. The output of this plugin therefore depends strongly on its location on the command line.

--debug

This option is not a plugin, but internal to Dstat. It will cause Dstat to show the actual time in milliseconds from start to end at the end of each line. This should be more or less close to the output of the dstat_dstat and dstat_dstat_cpu plugins.

It also influences the internal dstat_time plugin to show milliseconds instead of seconds, which may help showing the accuracy of Dstat itself.

--profile

This option is also not a plugin, but internal to Dstat. It provides you with detailed profiling information at the end of each run. The default settings can be changed inside Dstat (or a copy) to tweak the output you are looking for. It creates a temporary profiling file in the current directory while running, but cleans it up after exit.

Measuring plugins

Here is a small example of how one can measure the impact of a plugin.

The cost of running the timer plugin
[dag@rhun dag]$ dstat -t --debug
Module dstat_time
-----time-----
  date/time
19-08 20:34:21  5.90ms
19-08 20:34:22  0.17ms
19-08 20:34:23  0.18ms
19-08 20:34:24  0.18ms

Compare this with other plugins to see what the cost is of an individual plugin.

The cost of running the dstat_cpu plugin
[dag@rhun dstat]$ dstat -c --debug
Module dstat_cpu requires ['/proc/stat']
----total-cpu-usage----
usr sys idl wai hiq siq
 15   3  77   4   0   1 11.07ms
  5   3  92   0   0   0  0.66ms
  5   4  91   0   0   0  0.65ms
  5   3  92   0   0   0  0.66ms

As you can see, getting the CPU counters and calculating the CPU usage takes well under a millisecond on this particular system. But if we look at the usage of the dstat_top_cpu plugin:

The cost of running the dstat_top_cpu plugin
[dag@rhun dstat]$ dstat --top-cpu --debug
Module dstat_top_cpu
-most-expensive-
  cpu process
Xorg           2 43.82ms
Xorg           1 33.23ms
firefox-bin    2 33.54ms
Xorg           1 33.24ms

we see that processing the /proc/pid files causes the top-cpu plugin to use an additional 33ms.

Warning
These values show the time it takes to process the plugins and do not indicate the amount of CPU Dstat consumes. This obviously means that the processing time of plugins depends on how much the system is being stressed, as well as on what exactly the plugin is doing.

Plugins that communicate with other processes, or that process lots of information (eg. communicating with the mysql client, or processing the mail queue), may not actually use many local resources, but their latency causes Dstat to be slower at processing the other counters.

Future development

The Dstat release contains a TODO file highlighting all the items and ideas that have been played with. Here is a list of the most important ones:

  • Output

    • Changes in how Dstat colours digits within a value (the 6 in 6134B)

  • Exporting information

    • Connecting Dstat with rrdtool

    • Exporting to syslog or remote syslog (a way to transport counters ?)

  • Plugins

    • Be smart when plugins are loaded more than once (some plugins could benefit)

    • Add more plugins

  • Redesign Dstat

    • Create an object-model and namespace for plugins and counters so that other tools can be based on Dstat


dstat-0.7.4/docs/dstat.1000066400000000000000000000412771351755116500150160ustar00rootroot00000000000000'\" t .\" Title: dstat .\" Author: Dag Wieers .\" Generator: DocBook XSL Stylesheets v1.75.2 .\" Date: August 2014 .\" Manual: \ \& .\" Source: \ \& 0.7.3 .\" Language: English .\" .TH "DSTAT" "1" "August 2014" "\ \& 0\&.7\&.3" "\ \&" .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- .\" disable hyphenation .nh .\" disable justification (adjust text to left margin only) .ad l .\" ----------------------------------------------------------------- .\" * MAIN CONTENT STARTS HERE * .\" ----------------------------------------------------------------- .SH "NAME" dstat \- versatile tool for generating system resource statistics .SH "SYNOPSIS" .sp dstat [\-afv] [options\&.\&.] [delay [count]] .SH "DESCRIPTION" .sp Dstat is a versatile replacement for vmstat, iostat and ifstat\&. Dstat overcomes some of the limitations and adds some extra features\&. .sp Dstat allows you to view all of your system resources instantly, you can eg\&. compare disk usage in combination with interrupts from your IDE controller, or compare the network bandwidth numbers directly with the disk throughput (in the same interval)\&. .sp Dstat also cleverly gives you the most detailed information in columns and clearly indicates in what magnitude and unit the output is displayed\&. Less confusion, less mistakes, more efficient\&. .sp Dstat is unique in letting you aggregate block device throughput for a certain diskset or network bandwidth for a group of interfaces, ie\&. you can see the throughput for all the block devices that make up a single filesystem or storage system\&. .sp Dstat allows its data to be directly written to a CSV file to be imported and used by OpenOffice, Gnumeric or Excel to create graphs\&. 
.if n \{\ .sp .\} .RS 4 .it 1 an-trap .nr an-no-space-flag 1 .nr an-break-flag 1 .br .ps +1 \fBNote\fR .ps -1 .br .sp Users of Sleuthkit might find Sleuthkit\(cqs dstat being renamed to datastat to avoid a name conflict\&. See Debian bug #283709 for more information\&. .sp .5v .RE .SH "OPTIONS" .PP \-c, \-\-cpu .RS 4 enable cpu stats (system, user, idle, wait), for more CPU related stats also see \fB\-\-cpu\-adv\fR and \fB\-\-cpu\-use\fR .RE .PP \-C 0,3,total .RS 4 include cpu0, cpu3 and total (when using \-c/\-\-cpu); use \fIall\fR to show all CPUs .RE .PP \-d, \-\-disk .RS 4 enable disk stats (read, write), for more disk related stats look into the other \fB\-\-disk\fR plugins .RE .PP \-D total,hda .RS 4 include total and hda (when using \-d/\-\-disk) .RE .PP \-g, \-\-page .RS 4 enable page stats (page in, page out) .RE .PP \-i, \-\-int .RS 4 enable interrupt stats .RE .PP \-I 5,10 .RS 4 include interrupt 5 and 10 (when using \-i/\-\-int) .RE .PP \-l, \-\-load .RS 4 enable load average stats (1 min, 5 mins, 15mins) .RE .PP \-m, \-\-mem .RS 4 enable memory stats (used, buffers, cache, free); for more memory related stats also try \fB\-\-mem\-adv\fR and \fB\-\-swap\fR .RE .PP \-n, \-\-net .RS 4 enable network stats (receive, send) .RE .PP \-N eth1,total .RS 4 include eth1 and total (when using \-n/\-\-net) .RE .PP \-p, \-\-proc .RS 4 enable process stats (runnable, uninterruptible, new) .RE .PP \-r, \-\-io .RS 4 enable I/O request stats (read, write requests) .RE .PP \-s, \-\-swap .RS 4 enable swap stats (used, free) .RE .PP \-S swap1,total .RS 4 include swap1 and total (when using \-s/\-\-swap) .RE .PP \-t, \-\-time .RS 4 enable time/date output .RE .PP \-T, \-\-epoch .RS 4 enable time counter (seconds since epoch) .RE .PP \-y, \-\-sys .RS 4 enable system stats (interrupts, context switches) .RE .PP \-\-aio .RS 4 enable aio stats (asynchronous I/O) .RE .PP \-\-cpu\-adv .RS 4 enable advanced cpu stats .RE .PP \-\-cpu\-use .RS 4 enable only cpu usage stats .RE .PP 
\-\-fs, \-\-filesystem .RS 4 enable filesystem stats (open files, inodes) .RE .PP \-\-ipc .RS 4 enable ipc stats (message queue, semaphores, shared memory) .RE .PP \-\-lock .RS 4 enable file lock stats (posix, flock, read, write) .RE .PP \-\-mem\-adv .RS 4 enable advanced memory stats .RE .PP \-\-raw .RS 4 enable raw stats (raw sockets) .RE .PP \-\-socket .RS 4 enable socket stats (total, tcp, udp, raw, ip\-fragments) .RE .PP \-\-tcp .RS 4 enable tcp stats (listen, established, syn, time_wait, close) .RE .PP \-\-udp .RS 4 enable udp stats (listen, active) .RE .PP \-\-unix .RS 4 enable unix stats (datagram, stream, listen, active) .RE .PP \-\-vm .RS 4 enable vm stats (hard pagefaults, soft pagefaults, allocated, free) .RE .PP \-\-vm\-adv .RS 4 enable advance vm stats (steal, scanK, scanD, pgoru, astll) .RE .PP \-\-zones .RS 4 enable zoneinfo stats (d32F, d32H, normF, normH) .RE .PP \-\-plugin\-name .RS 4 enable (external) plugins by plugin name, see \fBPLUGINS\fR for options .RE .PP Possible internal stats are .RS 4 aio, cpu, cpu24, cpu\-adv, cpu\-use, disk, disk24, disk24\-old, epoch, fs, int, int24, io, ipc, load, lock, mem, mem\-adv, net, page, page24, proc, raw, socket, swap, swap\-old, sys, tcp, time, udp, unix, vm, vm\-adv, zones .RE .PP \-\-list .RS 4 list the internal and external plugin names .RE .PP \-a, \-\-all .RS 4 equals \-cdngy (default) .RE .PP \-f, \-\-full .RS 4 expand \-C, \-D, \-I, \-N and \-S discovery lists .RE .PP \-v, \-\-vmstat .RS 4 equals \-pmgdsc \-D total .RE .PP \-\-bits .RS 4 force bits for values expressed in bytes .RE .PP \-\-float .RS 4 force float values on screen (mutual exclusive with \fB\-\-integer\fR) .RE .PP \-\-integer .RS 4 force integer values on screen (mutual exclusive with \fB\-\-float\fR) .RE .PP \-\-bw, \-\-blackonwhite .RS 4 change colors for white background terminal .RE .PP \-\-nocolor .RS 4 disable colors .RE .PP \-\-noheaders .RS 4 disable repetitive headers .RE .PP \-\-noupdate .RS 4 disable intermediate updates 
when delay > 1 .RE .PP \-\-output file .RS 4 write CSV output to file .RE .PP \-\-profile .RS 4 show profiling statistics when exiting dstat .RE .SH "PLUGINS" .sp While anyone can create their own dstat plugins (and contribute them) dstat ships with a number of plugins already that extend its capabilities greatly\&. Here is an overview of the plugins dstat ships with: .PP \-\-battery .RS 4 battery in percentage (needs ACPI) .RE .PP \-\-battery\-remain .RS 4 battery remaining in hours, minutes (needs ACPI) .RE .PP \-\-cpufreq .RS 4 CPU frequency in percentage (needs ACPI) .RE .PP \-\-dbus .RS 4 number of dbus connections (needs python\-dbus) .RE .PP \-\-disk\-avgqu .RS 4 average queue length of the requests that were issued to the device .RE .PP \-\-disk\-avgrq .RS 4 average size (in sectors) of the requests that were issued to the device .RE .PP \-\-disk\-svctm .RS 4 average service time (in milliseconds) for I/O requests that were issued to the device .RE .PP \-\-disk\-tps .RS 4 number of transfers per second that were issued to the device .RE .PP \-\-disk\-util .RS 4 percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device) .RE .PP \-\-disk\-wait .RS 4 average time (in milliseconds) for I/O requests issued to the device to be served .RE .PP \-\-dstat .RS 4 show dstat cputime consumption and latency .RE .PP \-\-dstat\-cpu .RS 4 show dstat advanced cpu usage .RE .PP \-\-dstat\-ctxt .RS 4 show dstat context switches .RE .PP \-\-dstat\-mem .RS 4 show dstat advanced memory usage .RE .PP \-\-fan .RS 4 fan speed (needs ACPI) .RE .PP \-\-freespace .RS 4 per filesystem disk usage .RE .PP \-\-gpfs .RS 4 GPFS read/write I/O (needs mmpmon) .RE .PP \-\-gpfs\-ops .RS 4 GPFS filesystem operations (needs mmpmon) .RE .PP \-\-helloworld .RS 4 Hello world example dstat plugin .RE .PP \-\-innodb\-buffer .RS 4 show innodb buffer stats .RE .PP \-\-innodb\-io .RS 4 show innodb I/O stats .RE .PP \-\-innodb\-ops .RS 4 show innodb 
operations counters .RE .PP \-\-lustre .RS 4 show lustre I/O throughput .RE .PP \-\-md\-status .RS 4 show software raid (md) progress and speed .RE .PP \-\-memcache\-hits .RS 4 show the number of hits and misses from memcache .RE .PP \-\-mysql5\-cmds .RS 4 show the MySQL5 command stats .RE .PP \-\-mysql5\-conn .RS 4 show the MySQL5 connection stats .RE .PP \-\-mysql5\-innodb .RS 4 show the MySQL5 innodb stats .RE .PP \-\-mysql5\-io .RS 4 show the MySQL5 I/O stats .RE .PP \-\-mysql5\-keys .RS 4 show the MySQL5 keys stats .RE .PP \-\-mysql\-io .RS 4 show the MySQL I/O stats .RE .PP \-\-mysql\-keys .RS 4 show the MySQL keys stats .RE .PP \-\-net\-packets .RS 4 show the number of packets received and transmitted .RE .PP \-\-nfs3 .RS 4 show NFS v3 client operations .RE .PP \-\-nfs3\-ops .RS 4 show extended NFS v3 client operations .RE .PP \-\-nfsd3 .RS 4 show NFS v3 server operations .RE .PP \-\-nfsd3\-ops .RS 4 show extended NFS v3 server operations .RE .PP \-\-nfsd4\-ops .RS 4 show extended NFS v4 server operations .RE .PP \-\-nfsstat4 .RS 4 show NFS v4 stats .RE .PP \-\-ntp .RS 4 show NTP time from an NTP server .RE .PP \-\-postfix .RS 4 show postfix queue sizes (needs postfix) .RE .PP \-\-power .RS 4 show power usage .RE .PP \-\-proc\-count .RS 4 show total number of processes .RE .PP \-\-qmail .RS 4 show qmail queue sizes (needs qmail) .RE .PP \-\-redis .RS 4 show redis stats .RE .PP \-\-rpc .RS 4 show RPC client calls stats .RE .PP \-\-rpcd .RS 4 show RPC server calls stats .RE .PP \-\-sendmail .RS 4 show sendmail queue size (needs sendmail) .RE .PP \-\-snmp\-cpu .RS 4 show CPU stats using SNMP from DSTAT_SNMPSERVER .RE .PP \-\-snmp\-load .RS 4 show load stats using SNMP from DSTAT_SNMPSERVER .RE .PP \-\-snmp\-mem .RS 4 show memory stats using SNMP from DSTAT_SNMPSERVER .RE .PP \-\-snmp\-net .RS 4 show network stats using SNMP from DSTAT_SNMPSERVER .RE .PP \-\-snmp\-net\-err .RS 4 show network errors using SNMP from DSTAT_SNMPSERVER .RE .PP \-\-snmp\-sys .RS 4 show system stats 
(interrupts and context switches) using SNMP from DSTAT_SNMPSERVER .RE .PP \-\-snooze .RS 4 show number of ticks per second .RE .PP \-\-squid .RS 4 show squid usage statistics .RE .PP \-\-test .RS 4 show test plugin output .RE .PP \-\-thermal .RS 4 system temperature sensors .RE .PP \-\-top\-bio .RS 4 show most expensive block I/O process .RE .PP \-\-top\-bio\-adv .RS 4 show most expensive block I/O process (incl\&. pid and other stats) .RE .PP \-\-top\-childwait .RS 4 show the process that spends the most time waiting for a child .RE .PP \-\-top\-cpu .RS 4 show most expensive CPU process .RE .PP \-\-top\-cpu\-adv .RS 4 show most expensive CPU process (incl\&. pid and other stats) .RE .PP \-\-top\-cputime .RS 4 show process using the most CPU time (in ms) .RE .PP \-\-top\-cputime\-avg .RS 4 show process with the highest average timeslice (in ms) .RE .PP \-\-top\-int .RS 4 show most frequent interrupt .RE .PP \-\-top\-io .RS 4 show most expensive I/O process .RE .PP \-\-top\-io\-adv .RS 4 show most expensive I/O process (incl\&. 
pid and other stats) .RE .PP \-\-top\-latency .RS 4 show process with highest total latency (in ms) .RE .PP \-\-top\-latency\-avg .RS 4 show process with the highest average latency (in ms) .RE .PP \-\-top\-mem .RS 4 show process using the most memory .RE .PP \-\-top\-oom .RS 4 show the process that the OOM killer will kill first .RE .PP \-\-utmp .RS 4 show number of utmp connections (needs python\-utmp) .RE .PP \-\-vm\-cpu .RS 4 show VMware CPU stats from hypervisor .RE .PP \-\-vm\-mem .RS 4 show VMware memory stats from hypervisor .RE .PP \-\-vm\-mem\-adv .RS 4 show advanced VMware memory stats from hypervisor .RE .PP \-\-vmk\-hba .RS 4 show VMware ESX kernel vmhba stats .RE .PP \-\-vmk\-int .RS 4 show VMware ESX kernel interrupt stats .RE .PP \-\-vmk\-nic .RS 4 show VMware ESX kernel port stats .RE .PP \-\-vz\-cpu .RS 4 show CPU usage per OpenVZ guest .RE .PP \-\-vz\-io .RS 4 show I/O usage per OpenVZ guest .RE .PP \-\-vz\-ubc .RS 4 show OpenVZ user beancounters .RE .PP \-\-wifi .RS 4 wireless link quality and signal\-to\-noise ratio .RE .PP \-\-zfs\-arc .RS 4 show ZFS arc stats .RE .PP \-\-zfs\-l2arc .RS 4 show ZFS l2arc stats .RE .PP \-\-zfs\-zil .RS 4 show ZFS zil stats .RE .SH "ARGUMENTS" .sp \fBdelay\fR is the delay in seconds between each update .sp \fBcount\fR is the number of updates to display before exiting .sp The default delay is 1 and count is unspecified (unlimited) .SH "INTERMEDIATE UPDATES" .sp When invoking dstat with a \fBdelay\fR greater than 1 and without the \fB\-\-noupdate\fR option, it will show intermediate updates, ie\&. the first update shows a 1 sec average, the second update a 2 second average, etc\&. until the delay has been reached\&. .sp So in case you specified a delay of 10, \fBthe 9 intermediate updates are NOT snapshots\fR, they are averages over the time that passed since the last final update\&. The end result is that you get a 10 second average on a new line, just like with vmstat\&. 
.SH "EXAMPLES" .sp Using dstat to relate disk\-throughput with network\-usage (eth0), total CPU\-usage and system counters: .sp .if n \{\ .RS 4 .\} .nf dstat \-dnyc \-N eth0 \-C total \-f 5 .fi .if n \{\ .RE .\} .sp Checking dstat\(cqs behaviour and the system impact of dstat: .sp .if n \{\ .RS 4 .\} .nf dstat \-taf \-\-debug .fi .if n \{\ .RE .\} .sp Using the time plugin together with cpu, net, disk, system, load, proc and top_cpu plugins: .sp .if n \{\ .RS 4 .\} .nf dstat \-tcndylp \-\-top\-cpu .fi .if n \{\ .RE .\} .sp this is identical to .sp .if n \{\ .RS 4 .\} .nf dstat \-\-time \-\-cpu \-\-net \-\-disk \-\-sys \-\-load \-\-proc \-\-top\-cpu .fi .if n \{\ .RE .\} .sp Using dstat to relate advanced cpu stats with interrupts per device: .sp .if n \{\ .RS 4 .\} .nf dstat \-t \-\-cpu\-adv \-yif .fi .if n \{\ .RE .\} .SH "BUGS" .sp Since it is practically impossible to test dstat on every possible permutation of kernel, python or distribution version, I need your help and your feedback to fix the remaining problems\&. If you have improvements or bugreports, please send them to: \m[blue]\fBdag@wieers\&.com\fR\m[]\&\s-2\u[1]\d\s+2 .if n \{\ .sp .\} .RS 4 .it 1 an-trap .nr an-no-space-flag 1 .nr an-break-flag 1 .br .ps +1 \fBNote\fR .ps -1 .br .sp Please see the TODO file for known bugs and future plans\&. .sp .5v .RE .SH "FILES" .sp Paths that may contain external dstat_*\&.py plugins: .sp .if n \{\ .RS 4 .\} .nf ~/\&.dstat/ (path of binary)/plugins/ /usr/share/dstat/ /usr/local/share/dstat/ .fi .if n \{\ .RE .\} .SH "ENVIRONMENT VARIABLES" .sp Dstat will read additional command line arguments from the environment variable \fBDSTAT_OPTS\fR\&. You can use this to configure Dstat\(cqs default behavior, e\&.g\&. if you have a black\-on\-white terminal: .sp .if n \{\ .RS 4 .\} .nf export DSTAT_OPTS="\-\-bw \-\-noupdate" .fi .if n \{\ .RE .\} .sp Other internal or external plugins have their own environment variables to influence their behavior, e\&.g\&. 
.sp .if n \{\ .RS 4 .\} .nf DSTAT_NTPSERVER .fi .if n \{\ .RE .\} .sp .if n \{\ .RS 4 .\} .nf DSTAT_MYSQL DSTAT_MYSQL_HOST DSTAT_MYSQL_PORT DSTAT_MYSQL_SOCKET DSTAT_MYSQL_USER DSTAT_MYSQL_PWD .fi .if n \{\ .RE .\} .sp .if n \{\ .RS 4 .\} .nf DSTAT_SNMPSERVER DSTAT_SNMPCOMMUNITY .fi .if n \{\ .RE .\} .sp .if n \{\ .RS 4 .\} .nf DSTAT_SQUID_OPTS .fi .if n \{\ .RE .\} .sp .if n \{\ .RS 4 .\} .nf DSTAT_TIMEFMT .fi .if n \{\ .RE .\} .SH "SEE ALSO" .SS "Performance tools" .sp .if n \{\ .RS 4 .\} .nf htop(1), ifstat(1), iftop(8), iostat(1), mpstat(1), netstat(8), nfsstat(8), perf(1), powertop(1), rtacct(8), top(1), vmstat(8), xosview(1) .fi .if n \{\ .RE .\} .SS "Process tracing" .sp .if n \{\ .RS 4 .\} .nf lslk(8), lsof(8), ltrace(1), pidstat(1), pmap(1), ps(1), pstack(1), strace(1) .fi .if n \{\ .RE .\} .SS "Binary debugging" .sp .if n \{\ .RS 4 .\} .nf ldd(1), file(1), nm(1), objdump(1), readelf(1) .fi .if n \{\ .RE .\} .SS "Memory usage tools" .sp .if n \{\ .RS 4 .\} .nf free(1), memusage, memusagestat, ps_mem(1), slabtop(1), smem(8) .fi .if n \{\ .RE .\} .SS "Accounting tools" .sp .if n \{\ .RS 4 .\} .nf acct(2), dump\-acct(8), dump\-utmp(8), lastcomm(1), sa(8) .fi .if n \{\ .RE .\} .SS "Hardware debugging tools" .sp .if n \{\ .RS 4 .\} .nf dmidecode(8), ifinfo(1), lsdev(1), lshal(1), lshw(1), lsmod(8), lspci(8), lsusb(8), numactl(8), smartctl(8), turbostat(8), x86info(1) .fi .if n \{\ .RE .\} .SS "Application debugging" .sp .if n \{\ .RS 4 .\} .nf mailstats(8), qshape(1) .fi .if n \{\ .RE .\} .SS "Xorg related tools" .sp .if n \{\ .RS 4 .\} .nf xdpyinfo(1), xrestop(1) .fi .if n \{\ .RE .\} .SS "Other useful info" .sp .if n \{\ .RS 4 .\} .nf collectl(1), proc(5), procinfo(8) .fi .if n \{\ .RE .\} .SH "AUTHOR" .sp Written by Dag Wieers \m[blue]\fBdag@wieers\&.com\fR\m[]\&\s-2\u[1]\d\s+2 .sp Homepage at \m[blue]\fBhttp://dag\&.wieers\&.com/home\-made/dstat/\fR\m[] .sp This manpage was initially written by Andrew Pollock 
\m[blue]\fBapollock@debian\&.org\fR\m[]\&\s-2\u[2]\d\s+2 for the Debian GNU/Linux system\&. .SH "AUTHOR" .PP \fBDag Wieers\fR <\&dag@wieers\&.com\&> .RS 4 Author. .RE .SH "NOTES" .IP " 1." 4 dag@wieers.com .RS 4 \%mailto:dag@wieers.com .RE .IP " 2." 4 apollock@debian.org .RS 4 \%mailto:apollock@debian.org .RE dstat-0.7.4/docs/dstat.1.adoc000066400000000000000000000320711351755116500157130ustar00rootroot00000000000000= dstat(1) Dag Wieers v0.7.3, August 2014 == NAME dstat - versatile tool for generating system resource statistics == SYNOPSIS dstat [-afv] [options..] [delay [count]] == DESCRIPTION Dstat is a versatile replacement for vmstat, iostat and ifstat. Dstat overcomes some of the limitations and adds some extra features. Dstat allows you to view all of your system resources instantly, you can eg. compare disk usage in combination with interrupts from your IDE controller, or compare the network bandwidth numbers directly with the disk throughput (in the same interval). Dstat also cleverly gives you the most detailed information in columns and clearly indicates in what magnitude and unit the output is displayed. Less confusion, less mistakes, more efficient. Dstat is unique in letting you aggregate block device throughput for a certain diskset or network bandwidth for a group of interfaces, ie. you can see the throughput for all the block devices that make up a single filesystem or storage system. Dstat allows its data to be directly written to a CSV file to be imported and used by OpenOffice, Gnumeric or Excel to create graphs. [NOTE] Users of Sleuthkit might find Sleuthkit's dstat being renamed to datastat to avoid a name conflict. See Debian bug #283709 for more information. 
== OPTIONS -c, --cpu:: enable cpu stats (system, user, idle, wait), for more CPU related stats also see *--cpu-adv* and *--cpu-use* -C 0,3,total:: include cpu0, cpu3 and total (when using -c/--cpu); use 'all' to show all CPUs -d, --disk:: enable disk stats (read, write), for more disk related stats look into the other *--disk* plugins -D total,hda:: include total and hda (when using -d/--disk) -g, --page:: enable page stats (page in, page out) -i, --int:: enable interrupt stats -I 5,10:: include interrupt 5 and 10 (when using -i/--int) -l, --load:: enable load average stats (1 min, 5 mins, 15 mins) -m, --mem:: enable memory stats (used, buffers, cache, free); for more memory related stats also try *--mem-adv* and *--swap* -n, --net:: enable network stats (receive, send) -N eth1,total:: include eth1 and total (when using -n/--net) -p, --proc:: enable process stats (runnable, uninterruptible, new) -r, --io:: enable I/O request stats (read, write requests) -s, --swap:: enable swap stats (used, free) -S swap1,total:: include swap1 and total (when using -s/--swap) -t, --time:: enable time/date output -T, --epoch:: enable time counter (seconds since epoch) -y, --sys:: enable system stats (interrupts, context switches) --aio:: enable aio stats (asynchronous I/O) --cpu-adv:: enable advanced cpu stats --cpu-use:: enable only cpu usage stats --fs, --filesystem:: enable filesystem stats (open files, inodes) --ipc:: enable ipc stats (message queue, semaphores, shared memory) --lock:: enable file lock stats (posix, flock, read, write) --mem-adv:: enable advanced memory stats --raw:: enable raw stats (raw sockets) --socket:: enable socket stats (total, tcp, udp, raw, ip-fragments) --tcp:: enable tcp stats (listen, established, syn, time_wait, close) --udp:: enable udp stats (listen, active) --unix:: enable unix stats (datagram, stream, listen, active) --vm:: enable vm stats (hard pagefaults, soft pagefaults, allocated, free) --vm-adv:: enable advanced vm stats (steal, scanK, 
scanD, pgoru, astll) --zones:: enable zoneinfo stats (d32F, d32H, normF, normH) --<plugin-name>:: enable (external) plugins by plugin name, see *PLUGINS* for options Possible internal stats are:: aio, cpu, cpu24, cpu-adv, cpu-use, disk, disk24, disk24-old, epoch, fs, int, int24, io, ipc, load, lock, mem, mem-adv, net, page, page24, proc, raw, socket, swap, swap-old, sys, tcp, time, udp, unix, vm, vm-adv, zones --list:: list the internal and external plugin names -a, --all:: equals -cdngy (default) -f, --full:: expand -C, -D, -I, -N and -S discovery lists -v, --vmstat:: equals -pmgdsc -D total --bits:: force bits for values expressed in bytes --float:: force float values on screen (mutually exclusive with *--integer*) --integer:: force integer values on screen (mutually exclusive with *--float*) --bw, --blackonwhite:: change colors for white background terminal --nocolor:: disable colors --noheaders:: disable repetitive headers --noupdate:: disable intermediate updates when delay > 1 --output file:: write CSV output to file --profile:: show profiling statistics when exiting dstat == PLUGINS While anyone can create their own dstat plugins (and contribute them), dstat already ships with a number of plugins that extend its capabilities greatly. 
Here is an overview of the plugins dstat ships with: --battery:: battery in percentage (needs ACPI) --battery-remain:: battery remaining in hours, minutes (needs ACPI) --cpufreq:: CPU frequency in percentage (needs ACPI) --dbus:: number of dbus connections (needs python-dbus) --disk-avgqu:: average queue length of the requests that were issued to the device --disk-avgrq:: average size (in sectors) of the requests that were issued to the device --disk-svctm:: average service time (in milliseconds) for I/O requests that were issued to the device --disk-tps:: number of transfers per second that were issued to the device --disk-util:: percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device) --disk-wait:: average time (in milliseconds) for I/O requests issued to the device to be served --dstat:: show dstat cputime consumption and latency --dstat-cpu:: show dstat advanced cpu usage --dstat-ctxt:: show dstat context switches --dstat-mem:: show dstat advanced memory usage --fan:: fan speed (needs ACPI) --freespace:: per filesystem disk usage --gpfs:: GPFS read/write I/O (needs mmpmon) --gpfs-ops:: GPFS filesystem operations (needs mmpmon) --helloworld:: Hello world example dstat plugin --innodb-buffer:: show innodb buffer stats --innodb-io:: show innodb I/O stats --innodb-ops:: show innodb operations counters --lustre:: show lustre I/O throughput --md-status:: show software raid (md) progress and speed --memcache-hits:: show the number of hits and misses from memcache --mysql5-cmds:: show the MySQL5 command stats --mysql5-conn:: show the MySQL5 connection stats --mysql5-innodb:: show the MySQL5 innodb stats --mysql5-io:: show the MySQL5 I/O stats --mysql5-keys:: show the MySQL5 keys stats --mysql-io:: show the MySQL I/O stats --mysql-keys:: show the MySQL keys stats --net-packets:: show the number of packets received and transmitted --nfs3:: show NFS v3 client operations --nfs3-ops:: show extended NFS v3 client 
operations --nfsd3:: show NFS v3 server operations --nfsd3-ops:: show extended NFS v3 server operations --nfsd4-ops:: show extended NFS v4 server operations --nfsstat4:: show NFS v4 stats --ntp:: show NTP time from an NTP server --postfix:: show postfix queue sizes (needs postfix) --power:: show power usage --proc-count:: show total number of processes --qmail:: show qmail queue sizes (needs qmail) --redis:: show redis stats --rpc:: show RPC client calls stats --rpcd:: show RPC server calls stats --sendmail:: show sendmail queue size (needs sendmail) --snmp-cpu:: show CPU stats using SNMP from DSTAT_SNMPSERVER --snmp-load:: show load stats using SNMP from DSTAT_SNMPSERVER --snmp-mem:: show memory stats using SNMP from DSTAT_SNMPSERVER --snmp-net:: show network stats using SNMP from DSTAT_SNMPSERVER --snmp-net-err:: show network errors using SNMP from DSTAT_SNMPSERVER --snmp-sys:: show system stats (interrupts and context switches) using SNMP from DSTAT_SNMPSERVER --snooze:: show number of ticks per second --squid:: show squid usage statistics --test:: show test plugin output --thermal:: system temperature sensors --top-bio:: show most expensive block I/O process --top-bio-adv:: show most expensive block I/O process (incl. pid and other stats) --top-childwait:: show the process that spends the most time waiting for a child --top-cpu:: show most expensive CPU process --top-cpu-adv:: show most expensive CPU process (incl. pid and other stats) --top-cputime:: show process using the most CPU time (in ms) --top-cputime-avg:: show process with the highest average timeslice (in ms) --top-int:: show most frequent interrupt --top-io:: show most expensive I/O process --top-io-adv:: show most expensive I/O process (incl. 
pid and other stats) --top-latency:: show process with highest total latency (in ms) --top-latency-avg:: show process with the highest average latency (in ms) --top-mem:: show process using the most memory --top-oom:: show the process that the OOM killer will kill first --utmp:: show number of utmp connections (needs python-utmp) --vm-cpu:: show VMware CPU stats from hypervisor --vm-mem:: show VMware memory stats from hypervisor --vm-mem-adv:: show advanced VMware memory stats from hypervisor --vmk-hba:: show VMware ESX kernel vmhba stats --vmk-int:: show VMware ESX kernel interrupt stats --vmk-nic:: show VMware ESX kernel port stats --vz-cpu:: show CPU usage per OpenVZ guest --vz-io:: show I/O usage per OpenVZ guest --vz-ubc:: show OpenVZ user beancounters --wifi:: wireless link quality and signal-to-noise ratio --zfs-arc:: show ZFS arc stats --zfs-l2arc:: show ZFS l2arc stats --zfs-zil:: show ZFS zil stats == ARGUMENTS *delay* is the delay in seconds between each update *count* is the number of updates to display before exiting The default delay is 1 and count is unspecified (unlimited) == INTERMEDIATE UPDATES When invoking dstat with a *delay* greater than 1 and without the *--noupdate* option, it will show intermediate updates, ie. the first update shows a 1 sec average, the second update a 2 second average, etc. until the delay has been reached. So in case you specified a delay of 10, *the 9 intermediate updates are NOT snapshots*, they are averages over the time that passed since the last final update. The end result is that you get a 10 second average on a new line, just like with vmstat. 
== EXAMPLES Using dstat to relate disk-throughput with network-usage (eth0), total CPU-usage and system counters: ---- dstat -dnyc -N eth0 -C total -f 5 ---- Checking dstat's behaviour and the system impact of dstat: ---- dstat -taf --debug ---- Using the time plugin together with cpu, net, disk, system, load, proc and top_cpu plugins: ---- dstat -tcndylp --top-cpu ---- this is identical to ---- dstat --time --cpu --net --disk --sys --load --proc --top-cpu ---- Using dstat to relate advanced cpu stats with interrupts per device: ---- dstat -t --cpu-adv -yif ---- == BUGS Since it is practically impossible to test dstat on every possible permutation of kernel, python or distribution version, I need your help and your feedback to fix the remaining problems. If you have improvements or bugreports, please send them to: mailto:dag@wieers.com[] [NOTE] Please see the TODO file for known bugs and future plans. == FILES Paths that may contain external dstat_*.py plugins: ~/.dstat/ (path of binary)/plugins/ /usr/share/dstat/ /usr/local/share/dstat/ == ENVIRONMENT VARIABLES Dstat will read additional command line arguments from the environment variable *DSTAT_OPTS*. You can use this to configure Dstat's default behavior, e.g. if you have a black-on-white terminal: export DSTAT_OPTS="--bw --noupdate" Other internal or external plugins have their own environment variables to influence their behavior, e.g. 
DSTAT_NTPSERVER DSTAT_MYSQL DSTAT_MYSQL_HOST DSTAT_MYSQL_PORT DSTAT_MYSQL_SOCKET DSTAT_MYSQL_USER DSTAT_MYSQL_PWD DSTAT_SNMPSERVER DSTAT_SNMPCOMMUNITY DSTAT_SQUID_OPTS DSTAT_TIMEFMT == SEE ALSO === Performance tools htop(1), ifstat(1), iftop(8), iostat(1), mpstat(1), netstat(8), nfsstat(8), perf(1), powertop(1), rtacct(8), top(1), vmstat(8), xosview(1) === Process tracing lslk(8), lsof(8), ltrace(1), pidstat(1), pmap(1), ps(1), pstack(1), strace(1) === Binary debugging ldd(1), file(1), nm(1), objdump(1), readelf(1) === Memory usage tools free(1), memusage, memusagestat, ps_mem(1), slabtop(1), smem(8) === Accounting tools acct(2), dump-acct(8), dump-utmp(8), lastcomm(1), sa(8) === Hardware debugging tools dmidecode(8), ifinfo(1), lsdev(1), lshal(1), lshw(1), lsmod(8), lspci(8), lsusb(8), numactl(8), smartctl(8), turbostat(8), x86info(1) === Application debugging mailstats(8), qshape(1) === Xorg related tools xdpyinfo(1), xrestop(1) === Other useful info collectl(1), proc(5), procinfo(8) == AUTHOR Written by Dag Wieers mailto:dag@wieers.com[] Homepage at http://dag.wieers.com/home-made/dstat/[] This manpage was initially written by Andrew Pollock mailto:apollock@debian.org[] for the Debian GNU/Linux system. dstat-0.7.4/docs/dstat.1.html000066400000000000000000001216131351755116500157520ustar00rootroot00000000000000 dstat(1)

SYNOPSIS

dstat [-afv] [options..] [delay [count]]

DESCRIPTION

Dstat is a versatile replacement for vmstat, iostat and ifstat. Dstat overcomes some of the limitations and adds some extra features.

Dstat allows you to view all of your system resources instantly, you can eg. compare disk usage in combination with interrupts from your IDE controller, or compare the network bandwidth numbers directly with the disk throughput (in the same interval).

Dstat also cleverly gives you the most detailed information in columns and clearly indicates in what magnitude and unit the output is displayed. Less confusion, less mistakes, more efficient.

Dstat is unique in letting you aggregate block device throughput for a certain diskset or network bandwidth for a group of interfaces, ie. you can see the throughput for all the block devices that make up a single filesystem or storage system.

Dstat allows its data to be directly written to a CSV file to be imported and used by OpenOffice, Gnumeric or Excel to create graphs.

Note
Users of Sleuthkit might find Sleuthkit’s dstat being renamed to datastat to avoid a name conflict. See Debian bug #283709 for more information.

OPTIONS

-c, --cpu

enable cpu stats (system, user, idle, wait), for more CPU related stats also see --cpu-adv and --cpu-use

-C 0,3,total

include cpu0, cpu3 and total (when using -c/--cpu); use all to show all CPUs

-d, --disk

enable disk stats (read, write), for more disk related stats look into the other --disk plugins

-D total,hda

include total and hda (when using -d/--disk)

-g, --page

enable page stats (page in, page out)

-i, --int

enable interrupt stats

-I 5,10

include interrupt 5 and 10 (when using -i/--int)

-l, --load

enable load average stats (1 min, 5 mins, 15 mins)

-m, --mem

enable memory stats (used, buffers, cache, free); for more memory related stats also try --mem-adv and --swap

-n, --net

enable network stats (receive, send)

-N eth1,total

include eth1 and total (when using -n/--net)

-p, --proc

enable process stats (runnable, uninterruptible, new)

-r, --io

enable I/O request stats (read, write requests)

-s, --swap

enable swap stats (used, free)

-S swap1,total

include swap1 and total (when using -s/--swap)

-t, --time

enable time/date output

-T, --epoch

enable time counter (seconds since epoch)

-y, --sys

enable system stats (interrupts, context switches)

--aio

enable aio stats (asynchronous I/O)

--cpu-adv

enable advanced cpu stats

--cpu-use

enable only cpu usage stats

--fs, --filesystem

enable filesystem stats (open files, inodes)

--ipc

enable ipc stats (message queue, semaphores, shared memory)

--lock

enable file lock stats (posix, flock, read, write)

--mem-adv

enable advanced memory stats

--raw

enable raw stats (raw sockets)

--socket

enable socket stats (total, tcp, udp, raw, ip-fragments)

--tcp

enable tcp stats (listen, established, syn, time_wait, close)

--udp

enable udp stats (listen, active)

--unix

enable unix stats (datagram, stream, listen, active)

--vm

enable vm stats (hard pagefaults, soft pagefaults, allocated, free)

--vm-adv

enable advanced vm stats (steal, scanK, scanD, pgoru, astll)

--zones

enable zoneinfo stats (d32F, d32H, normF, normH)

--<plugin-name>

enable (external) plugins by plugin name, see PLUGINS for options

Possible internal stats are

aio, cpu, cpu24, cpu-adv, cpu-use, disk, disk24, disk24-old, epoch, fs, int, int24, io, ipc, load, lock, mem, mem-adv, net, page, page24, proc, raw, socket, swap, swap-old, sys, tcp, time, udp, unix, vm, vm-adv, zones

--list

list the internal and external plugin names

-a, --all

equals -cdngy (default)

-f, --full

expand -C, -D, -I, -N and -S discovery lists

-v, --vmstat

equals -pmgdsc -D total

--bits

force bits for values expressed in bytes

--float

force float values on screen (mutually exclusive with --integer)

--integer

force integer values on screen (mutually exclusive with --float)

--bw, --blackonwhite

change colors for white background terminal

--nocolor

disable colors

--noheaders

disable repetitive headers

--noupdate

disable intermediate updates when delay > 1

--output file

write CSV output to file

--profile

show profiling statistics when exiting dstat

PLUGINS

While anyone can create their own dstat plugins (and contribute them), dstat already ships with a number of plugins that extend its capabilities greatly. Here is an overview of the plugins dstat ships with:
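An external plugin is a Python file named dstat_*.py dropped into one of the paths listed under FILES. The sketch below mirrors the shape of the bundled --helloworld example; the tiny stand-in base class exists only so the snippet runs on its own (inside dstat the class inherits dstat's real base class, and the attribute names shown follow the helloworld plugin rather than a formally documented API):

```python
class dstat:
    """Stand-in for dstat's real base class, for illustration only."""
    def __init__(self):
        self.val = {}


class dstat_plugin(dstat):
    """Shape of an external plugin, modeled on dstat_helloworld.py."""
    def __init__(self):
        dstat.__init__(self)
        self.name = 'plugin title'   # column group header
        self.nick = ('counter',)     # per-column sub-header(s)
        self.vars = ('text',)        # keys that extract() fills in
        self.type = 's'              # 's' marks a string column
        self.width = 12
        self.scale = 0

    def extract(self):
        """Called once per interval to refresh the plugin's values."""
        self.val['text'] = 'Hello world!'
```

Compare with the shipped dstat_helloworld.py plugin (enabled with --helloworld) for the authoritative version.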

--battery

battery in percentage (needs ACPI)

--battery-remain

battery remaining in hours, minutes (needs ACPI)

--cpufreq

CPU frequency in percentage (needs ACPI)

--dbus

number of dbus connections (needs python-dbus)

--disk-avgqu

average queue length of the requests that were issued to the device

--disk-avgrq

average size (in sectors) of the requests that were issued to the device

--disk-svctm

average service time (in milliseconds) for I/O requests that were issued to the device

--disk-tps

number of transfers per second that were issued to the device

--disk-util

percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device)

--disk-wait

average time (in milliseconds) for I/O requests issued to the device to be served

--dstat

show dstat cputime consumption and latency

--dstat-cpu

show dstat advanced cpu usage

--dstat-ctxt

show dstat context switches

--dstat-mem

show dstat advanced memory usage

--fan

fan speed (needs ACPI)

--freespace

per filesystem disk usage

--gpfs

GPFS read/write I/O (needs mmpmon)

--gpfs-ops

GPFS filesystem operations (needs mmpmon)

--helloworld

Hello world example dstat plugin

--innodb-buffer

show innodb buffer stats

--innodb-io

show innodb I/O stats

--innodb-ops

show innodb operations counters

--lustre

show lustre I/O throughput

--md-status

show software raid (md) progress and speed

--memcache-hits

show the number of hits and misses from memcache

--mysql5-cmds

show the MySQL5 command stats

--mysql5-conn

show the MySQL5 connection stats

--mysql5-innodb

show the MySQL5 innodb stats

--mysql5-io

show the MySQL5 I/O stats

--mysql5-keys

show the MySQL5 keys stats

--mysql-io

show the MySQL I/O stats

--mysql-keys

show the MySQL keys stats

--net-packets

show the number of packets received and transmitted

--nfs3

show NFS v3 client operations

--nfs3-ops

show extended NFS v3 client operations

--nfsd3

show NFS v3 server operations

--nfsd3-ops

show extended NFS v3 server operations

--nfsd4-ops

show extended NFS v4 server operations

--nfsstat4

show NFS v4 stats

--ntp

show NTP time from an NTP server

--postfix

show postfix queue sizes (needs postfix)

--power

show power usage

--proc-count

show total number of processes

--qmail

show qmail queue sizes (needs qmail)

--redis

show redis stats

--rpc

show RPC client calls stats

--rpcd

show RPC server calls stats

--sendmail

show sendmail queue size (needs sendmail)

--snmp-cpu

show CPU stats using SNMP from DSTAT_SNMPSERVER

--snmp-load

show load stats using SNMP from DSTAT_SNMPSERVER

--snmp-mem

show memory stats using SNMP from DSTAT_SNMPSERVER

--snmp-net

show network stats using SNMP from DSTAT_SNMPSERVER

--snmp-net-err

show network errors using SNMP from DSTAT_SNMPSERVER

--snmp-sys

show system stats (interrupts and context switches) using SNMP from DSTAT_SNMPSERVER

--snooze

show number of ticks per second

--squid

show squid usage statistics

--test

show test plugin output

--thermal

system temperature sensors

--top-bio

show most expensive block I/O process

--top-bio-adv

show most expensive block I/O process (incl. pid and other stats)

--top-childwait

show the process that spends the most time waiting for a child

--top-cpu

show most expensive CPU process

--top-cpu-adv

show most expensive CPU process (incl. pid and other stats)

--top-cputime

show process using the most CPU time (in ms)

--top-cputime-avg

show process with the highest average timeslice (in ms)

--top-int

show most frequent interrupt

--top-io

show most expensive I/O process

--top-io-adv

show most expensive I/O process (incl. pid and other stats)

--top-latency

show process with highest total latency (in ms)

--top-latency-avg

show process with the highest average latency (in ms)

--top-mem

show process using the most memory

--top-oom

show the process that the OOM killer will kill first

--utmp

show number of utmp connections (needs python-utmp)

--vm-cpu

show VMware CPU stats from hypervisor

--vm-mem

show VMware memory stats from hypervisor

--vm-mem-adv

show advanced VMware memory stats from hypervisor

--vmk-hba

show VMware ESX kernel vmhba stats

--vmk-int

show VMware ESX kernel interrupt stats

--vmk-nic

show VMware ESX kernel port stats

--vz-cpu

show CPU usage per OpenVZ guest

--vz-io

show I/O usage per OpenVZ guest

--vz-ubc

show OpenVZ user beancounters

--wifi

wireless link quality and signal-to-noise ratio

--zfs-arc

show ZFS arc stats

--zfs-l2arc

show ZFS l2arc stats

--zfs-zil

show ZFS zil stats

ARGUMENTS

delay is the delay in seconds between each update

count is the number of updates to display before exiting

The default delay is 1 and count is unspecified (unlimited)

INTERMEDIATE UPDATES

When invoking dstat with a delay greater than 1 and without the --noupdate option, it will show intermediate updates, ie. the first update shows a 1 sec average, the second update a 2 second average, etc. until the delay has been reached.

So in case you specified a delay of 10, the 9 intermediate updates are NOT snapshots, they are averages over the time that passed since the last final update. The end result is that you get a 10 second average on a new line, just like with vmstat.
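The averaging rule above can be sketched in a few lines of Python (an illustration, not dstat's actual code): each update divides the counter total accumulated since the last final update by the number of seconds elapsed in the current window.

```python
def intermediate_averages(samples, delay):
    """Given per-second counter deltas, return what each screen
    update shows: a running average since the last final update,
    with every `delay`-th line being the final average."""
    shown = []
    total = 0.0
    for second, delta in enumerate(samples, start=1):
        total += delta
        elapsed = (second - 1) % delay + 1  # seconds in current window
        shown.append(total / elapsed)
        if elapsed == delay:                # final update: new window
            total = 0.0
    return shown
```

For example, with a delay of 2 the per-second deltas [2, 4, 6, 8] produce the updates [2.0, 3.0, 6.0, 7.0]; the second and fourth values are the final 2-second averages, just like the lines vmstat would print.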

EXAMPLES

Using dstat to relate disk-throughput with network-usage (eth0), total CPU-usage and system counters:

dstat -dnyc -N eth0 -C total -f 5

Checking dstat’s behaviour and the system impact of dstat:

dstat -taf --debug

Using the time plugin together with cpu, net, disk, system, load, proc and top_cpu plugins:

dstat -tcndylp --top-cpu

this is identical to

dstat --time --cpu --net --disk --sys --load --proc --top-cpu

Using dstat to relate advanced cpu stats with interrupts per device:

dstat -t --cpu-adv -yif

BUGS

Since it is practically impossible to test dstat on every possible permutation of kernel, python or distribution version, I need your help and your feedback to fix the remaining problems. If you have improvements or bug reports, please send them to: dag@wieers.com

Note
Please see the TODO file for known bugs and future plans.

FILES

Paths that may contain external dstat_*.py plugins:

~/.dstat/
(path of binary)/plugins/
/usr/share/dstat/
/usr/local/share/dstat/
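For reference, this search order is exactly how the dstat script builds its plugin search path (taken from the pluginpath list near the top of the dstat source):

```python
import os
import sys

# Plugin search path as defined in the dstat source:
pluginpath = [
    os.path.expanduser('~/.dstat/'),                              # home + /.dstat/
    os.path.abspath(os.path.dirname(sys.argv[0])) + '/plugins/',  # binary path + /plugins/
    '/usr/share/dstat/',
    '/usr/local/share/dstat/',
]
```

Directories earlier in the list take precedence, so a plugin in ~/.dstat/ shadows a system-wide copy of the same dstat_*.py file.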

ENVIRONMENT VARIABLES

Dstat will read additional command line arguments from the environment variable DSTAT_OPTS. You can use this to configure Dstat’s default behavior, e.g. if you have a black-on-white terminal:

export DSTAT_OPTS="--bw --noupdate"

Other internal or external plugins have their own environment variables to influence their behavior, e.g.

DSTAT_NTPSERVER
DSTAT_MYSQL
DSTAT_MYSQL_HOST
DSTAT_MYSQL_PORT
DSTAT_MYSQL_SOCKET
DSTAT_MYSQL_USER
DSTAT_MYSQL_PWD
DSTAT_SNMPSERVER
DSTAT_SNMPCOMMUNITY
DSTAT_SQUID_OPTS
DSTAT_TIMEFMT
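A minimal sketch of how DSTAT_OPTS can be merged with the real command line (illustrative only; dstat's actual parsing happens inside its own Options class, and `effective_args` is a hypothetical helper name):

```python
import os
import shlex

def effective_args(argv):
    """Prepend any options found in DSTAT_OPTS to the command-line args."""
    return shlex.split(os.environ.get('DSTAT_OPTS', '')) + argv

os.environ['DSTAT_OPTS'] = '--bw --noupdate'
print(effective_args(['-taf', '5']))  # -> ['--bw', '--noupdate', '-taf', '5']
```

Because the environment options come first, explicit command-line options can still override them where later options win.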

SEE ALSO

Performance tools

htop(1), ifstat(1), iftop(8), iostat(1), mpstat(1), netstat(8), nfsstat(8), perf(1), powertop(1), rtacct(8), top(1), vmstat(8), xosview(1)

Process tracing

lslk(8), lsof(8), ltrace(1), pidstat(1), pmap(1), ps(1), pstack(1), strace(1)

Binary debugging

ldd(1), file(1), nm(1), objdump(1), readelf(1)

Memory usage tools

free(1), memusage, memusagestat, ps_mem(1), slabtop(1), smem(8)

Accounting tools

acct(2), dump-acct(8), dump-utmp(8), lastcomm(1), sa(8)

Hardware debugging tools

dmidecode(8), ifinfo(1), lsdev(1), lshal(1), lshw(1), lsmod(8), lspci(8), lsusb(8), numactl(8), smartctl(8), turbostat(8), x86info(1)

Application debugging

mailstats(8), qshape(1)

Xorg related tools

xdpyinfo(1), xrestop(1)

Other useful info

collectl(1), proc(5), procinfo(8)

AUTHOR

Written by Dag Wieers dag@wieers.com

This manpage was initially written by Andrew Pollock apollock@debian.org for the Debian GNU/Linux system.


dstat-0.7.4/docs/examples.adoc:

= Dstat examples

I've written a few examples that make use of the Dstat classes.

The following examples currently exist:

 read.py   - shows how to access dstat data
 mstat.py  - small sub-second ministat tool

Please send other examples or tools that make use of Dstat classes or changes to extend the current infrastructure.

I'm not particularly happy with the current interface to Dstat, so any hints on how to improve it are welcome. Also look at the TODO for future changes.

NOTE: Please send me improvements to this document.


dstat-0.7.4/docs/performance.adoc:

= Dstat performance

== Introduction

Since Dstat is written in python, it is not optimized for performance. That doesn't mean Dstat performs badly: it performs quite well given that it is written in python, and a lot of dedication went into profiling and optimizing Dstat and the Dstat plugins.

When doing performance analysis, however, it is always important to verify that the monitoring tool is not interfering with the performance numbers (eg. writing to disk, using cpu/memory/network, increasing load).

== Compare with baseline

Depending on the plugins being used and the load on the server itself, the impact Dstat has on the system you are monitoring might be considerable. A lot of plugins are pretty fast (less than 0.1ms on a modest 1.2GHz laptop), but some plugins may use up to 3ms or even up to 2% of your CPU (eg. each top-plugin scans the process list).

Before performing any tests, please verify for yourself what impact Dstat has on your test results and keep that in mind when analysing the results afterwards. Especially if you suspect Dstat is influencing your results, do a baseline with and without the Dstat commandline.

== Selection of plugins

In case the impact is higher than expected, reduce the number of plugins and remove expensive plugins, or even better, look at the plugin you're using and send me optimizations.

Newer python versions are also faster than older ones, and hardware keeps getting faster, so at some point these considerations may no longer hold.

== Debugging and profiling Dstat

If you need feedback about plugin performance, use the --debug option to profile different plugins. If you use -t together with --debug, you can see the time deviation on your system in relation to load/plugins.

If you want to profile certain plugins, you can use the --profile option, which provides detailed information about the most expensive function calls.

You can also run the dstat plugin (--dstat) to see what overhead (cputime) and response (latency) Dstat has during runtime, which can be very useful to compare against your baseline and the system in idle state.

One common way to profile a single plugin is to use the following commandlines:

 dstat -t --dstat --debug --profile
 dstat -t --dstat --top-cpu --debug --profile

The default profiling infrastructure is quite expensive, so it is important to first make a baseline including the profiling itself, then compare it against the same commandline including the plugin you want to profile.

== Improving Dstat's footprint even more

Another way to win a few CPU cycles is to pre-compile the Dstat plugins by running the compileall.py script that comes with python on your plugins directory. It can save about 10% in execution time.

Remember that invisible plugins (that run out of your terminal window) do take up cycles, because the information is still being collected and possibly written to CSV output.

It should be possible to write plugins in C to reduce the impact on the system, but I have no experience with writing python modules in C. Any feedback on this is welcome.

== Performance tuning

The following documents may be useful to tune a system for performance:

* http://people.redhat.com/alikins/system_tuning.html[]

NOTE: Please send me improvements to this document.
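The pre-compilation advice above (running python's bundled compileall module over the plugin directories) can be scripted as below. The loop simply skips plugin directories that do not exist; the paths come from the manpage's FILES section, and `python3` is assumed here (older systems may need plain `python`):

```shell
# Byte-compile dstat plugins in every standard plugin directory that exists.
for dir in ~/.dstat /usr/share/dstat /usr/local/share/dstat; do
    if [ -d "$dir" ]; then
        python3 -m compileall -q "$dir"
    fi
done
```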


dstat-0.7.4/docs/screen.adoc:

= Configuring screen to display multiple dstat for different systems

Here is an example of how I monitor 5 nodes in a cluster with a minimum of effort, using screen.

Put the following content in a file called screenrc-5nodes:

----
startup_message off
defwrap off
split
split
split
split
screen -t node01 1 ssh -t 172.17.0.211 'dstat -cdnyp --tcp --udp -l -D lores,hires -N bond0,eth0,eth2,eth3 10'
focus down
screen -t node02 2 ssh -t 172.17.0.212 'dstat -cdnyp --tcp --udp -l -D lores,hires -N bond0,eth0,eth2,eth3 10'
focus down
screen -t node03 3 ssh -t 172.17.0.213 'dstat -cdnyp --tcp --udp -l -D lores,hires -N bond0,eth0,eth2,eth3 10'
focus down
screen -t node04 4 ssh -t 172.17.0.214 'dstat -cdnyp --tcp --udp -l -D lores,hires -N bond0,eth0,eth2,eth3 10'
focus down
screen -t node05 5 ssh -t 172.17.0.215 'dstat -cdnyp --tcp --udp -l -D lores,hires -N bond0,eth0,eth2,eth3 10'
----

Then set the environment variable to tell screen to use this config file for the next screen session:

----
SCREENRC='screenrc-5nodes' screen
----

If you want to get out of this screen and end all dstats, the easiest way is to first kill all regions and then end each dstat. You can do this by pressing:

----
ctrl-a X
----

five times, and then quitting each dstat by pressing:

----
ctrl-c
----

five times.

If you have other tips or hints, please send them to: <dag@wieers.com>

NOTE: Please send me improvements to this document.


dstat-0.7.4/dstat:

#!/usr/bin/env python

### This program is free software; you can redistribute it and/or
### modify it under the terms of the GNU General Public License
### as published by the Free Software Foundation; either version 2
### of the License, or (at your option) any later version.
###
### This program is distributed in the hope that it will be useful,
### but WITHOUT ANY WARRANTY; without even the implied warranty of
### MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
### GNU General Public License for more details.
###
### You should have received a copy of the GNU General Public License
### along with this program; if not, write to the Free Software
### Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

### Copyright 2004-2019 Dag Wieers <dag@wieers.com>

from __future__ import absolute_import, division, generators, print_function
__metaclass__ = type

import collections
import fnmatch
import getopt
import getpass
import glob
import linecache
import os
import re
import resource
import sched
import six
import sys
import time

VERSION = '0.8.0'

theme = { 'default': '' }

if sys.version_info < (2, 2):
    sys.exit('error: Python 2.2 or later required')

pluginpath = [
    os.path.expanduser('~/.dstat/'),                              # home + /.dstat/
    os.path.abspath(os.path.dirname(sys.argv[0])) + '/plugins/',  # binary path + /plugins/
    '/usr/share/dstat/',
    '/usr/local/share/dstat/',
]

class Options:
    def __init__(self, args):
        self.args = args
        self.bits = False
        self.blackonwhite = False
        self.count = -1
        self.cpulist = None
        self.debug = 0
        self.delay = 1
        self.disklist = None
        self.full = False
        self.float = False
        self.integer = False
        self.intlist = None
        self.netlist = None
        self.swaplist = None
        self.color = None
        self.update = True
        self.header = True
        self.output = False
        self.pidfile = False
        self.profile = ''

        ### List of available plugins
        allplugins = listplugins()

        ### List of plugins to show
self.plugins = [] ### Implicit if no terminal is used if not sys.stdout.isatty(): self.color = False self.header = False self.update = False ### Temporary hardcoded for my own project self.diskset = { 'local': ('sda', 'hd[a-d]'), 'lores': ('sd[b-k]', 'sd[v-z]', 'sda[a-e]'), 'hires': ('sd[l-u]', 'sda[f-o]'), } try: opts, args = getopt.getopt(args, 'acdfghilmno:prstTvyC:D:I:M:N:S:V', ['all', 'all-plugins', 'bits', 'bw', 'black-on-white', 'color', 'debug', 'filesystem', 'float', 'full', 'help', 'integer', 'list', 'mods', 'modules', 'nocolor', 'noheaders', 'noupdate', 'output=', 'pidfile=', 'profile', 'version', 'vmstat'] + allplugins) except getopt.error as exc: print('dstat: %s, try dstat -h for a list of all the options' % exc) sys.exit(1) for opt, arg in opts: if opt in ['-c']: self.plugins.append('cpu') elif opt in ['-C']: self.cpulist = arg.split(',') elif opt in ['-d']: self.plugins.append('disk') elif opt in ['-D']: self.disklist = arg.split(',') elif opt in ['--filesystem']: self.plugins.append('fs') elif opt in ['-g']: self.plugins.append('page') elif opt in ['-i']: self.plugins.append('int') elif opt in ['-I']: self.intlist = arg.split(',') elif opt in ['-l']: self.plugins.append('load') elif opt in ['-m']: self.plugins.append('mem') elif opt in ['-M', '--mods', '--modules']: print('WARNING: Option %s is deprecated, please use --%s instead' % (opt, ' --'.join(arg.split(','))), file=sys.stderr) self.plugins += arg.split(',') elif opt in ['-n']: self.plugins.append('net') elif opt in ['-N']: self.netlist = arg.split(',') elif opt in ['-p']: self.plugins.append('proc') elif opt in ['-r']: self.plugins.append('io') elif opt in ['-s']: self.plugins.append('swap') elif opt in ['-S']: self.swaplist = arg.split(',') elif opt in ['-t']: self.plugins.append('time') elif opt in ['-T']: self.plugins.append('epoch') elif opt in ['-y']: self.plugins.append('sys') elif opt in ['-a', '--all']: self.plugins += [ 'cpu', 'disk', 'net', 'page', 'sys' ] elif opt in ['-v', 
'--vmstat']: self.plugins += [ 'proc', 'mem', 'page', 'disk', 'sys', 'cpu' ] elif opt in ['-f', '--full']: self.full = True elif opt in ['--all-plugins']: ### Make list unique in a fancy fast way plugins = list({}.fromkeys(allplugins).keys()) plugins.sort() self.plugins += plugins elif opt in ['--bits']: self.bits = True elif opt in ['--bw', '--black-on-white', '--blackonwhite']: self.blackonwhite = True elif opt in ['--color']: self.color = True self.update = True elif opt in ['--debug']: self.debug = self.debug + 1 elif opt in ['--float']: self.float = True elif opt in ['--integer']: self.integer = True elif opt in ['--list']: showplugins() sys.exit(0) elif opt in ['--nocolor']: self.color = False elif opt in ['--noheaders']: self.header = False elif opt in ['--noupdate']: self.update = False elif opt in ['-o', '--output']: self.output = arg elif opt in ['--pidfile']: self.pidfile = arg elif opt in ['--profile']: self.profile = 'dstat_profile.log' elif opt in ['-h', '--help']: self.usage() self.help() sys.exit(0) elif opt in ['-V', '--version']: self.version() sys.exit(0) elif opt.startswith('--'): self.plugins.append(opt[2:]) else: print('dstat: option %s unknown to getopt, try dstat -h for a list of all the options' % opt) sys.exit(1) if self.float and self.integer: print('dstat: option --float and --integer are mutual exclusive, you can only force one') sys.exit(1) if not self.plugins: print('You did not select any stats, using -cdngy by default.') self.plugins = [ 'cpu', 'disk', 'net', 'page', 'sys' ] try: if len(args) > 0: self.delay = int(args[0]) if len(args) > 1: self.count = int(args[1]) except: print('dstat: incorrect argument, try dstat -h for the correct syntax') sys.exit(1) if self.delay <= 0: print('dstat: delay must be an integer, greater than zero') sys.exit(1) if self.debug: print('Plugins: %s' % self.plugins) def version(self): print('Dstat %s' % VERSION) print('Written by Dag Wieers ') print('Homepage at http://dag.wieers.com/home-made/dstat/') 
print() print('Platform %s/%s' % (os.name, sys.platform)) print('Kernel %s' % os.uname()[2]) print('Python %s' % sys.version) print() color = "" if not gettermcolor(): color = "no " print('Terminal type: %s (%scolor support)' % (os.getenv('TERM'), color)) rows, cols = gettermsize() print('Terminal size: %d lines, %d columns' % (rows, cols)) print() print('Processors: %d' % getcpunr()) print('Pagesize: %d' % resource.getpagesize()) print('Clock ticks per secs: %d' % os.sysconf('SC_CLK_TCK')) print() global op op = self showplugins() def usage(self): print('Usage: dstat [-afv] [options..] [delay [count]]') def help(self): print('''Versatile tool for generating system resource statistics) Dstat options: -c, --cpu enable cpu stats -C 0,3,total include cpu0, cpu3 and total -d, --disk enable disk stats -D total,hda include hda and total -g, --page enable page stats -i, --int enable interrupt stats -I 5,eth2 include int5 and interrupt used by eth2 -l, --load enable load stats -m, --mem enable memory stats -n, --net enable network stats -N eth1,total include eth1 and total -p, --proc enable process stats -r, --io enable io stats (I/O requests completed) -s, --swap enable swap stats -S swap1,total include swap1 and total -t, --time enable time/date output -T, --epoch enable time counter (seconds since epoch) -y, --sys enable system stats --aio enable aio stats --fs, --filesystem enable fs stats --ipc enable ipc stats --lock enable lock stats --raw enable raw stats --socket enable socket stats --tcp enable tcp stats --udp enable udp stats --unix enable unix stats --vm enable vm stats --vm-adv enable advanced vm stats --zones enable zoneinfo stats --list list all available plugins -- enable external plugin by name (see --list) -a, --all equals -cdngy (default) -f, --full automatically expand -C, -D, -I, -N and -S lists -v, --vmstat equals -pmgdsc -D total --bits force bits for values expressed in bytes --float force float values on screen --integer force integer values on 
screen --bw, --black-on-white change colors for white background terminal --color force colors --nocolor disable colors --noheaders disable repetitive headers --noupdate disable intermediate updates --output file write CSV output to file --profile show profiling statistics when exiting dstat delay is the delay in seconds between each update (default: 1) count is the number of updates to display before exiting (default: unlimited) ''') ### START STATS DEFINITIONS ### class dstat: vars = None name = None nick = None type = 'f' types = () width = 5 scale = 1024 scales = () cols = 0 struct = None # val = {} # set1 = {} # set2 = {} def prepare(self): if callable(self.discover): self.discover = self.discover() if callable(self.vars): self.vars = self.vars() if not self.vars: raise Exception('No counter objects to monitor') if callable(self.name): self.name = self.name() if callable(self.nick): self.nick = self.nick() if not self.nick: self.nick = self.vars self.val = {}; self.set1 = {}; self.set2 = {} if self.struct: ### Plugin API version 2 for name in self.vars + [ 'total', ]: self.val[name] = self.struct self.set1[name] = self.struct self.set2[name] = {} elif self.cols <= 0: ### Plugin API version 1 for name in self.vars: self.val[name] = self.set1[name] = self.set2[name] = 0 else: ### Plugin API version 1 for name in self.vars + [ 'total', ]: self.val[name] = list(range(self.cols)) self.set1[name] = list(range(self.cols)) self.set2[name] = list(range(self.cols)) for i in list(range(self.cols)): self.val[name][i] = self.set1[name][i] = self.set2[name][i] = 0 # print(self.val) def open(self, *filenames): "Open stat file descriptor" self.file = [] self.fd = [] for filename in filenames: try: fd = dopen(filename) if fd: self.file.append(filename) self.fd.append(fd) except: pass if not self.fd: raise Exception('Cannot open file %s' % filename) def readlines(self): "Return lines from any file descriptor" for fd in self.fd: fd.seek(0) for line in fd.readlines(): yield line 
### Implemented linecache (for top-plugins) but slows down normal plugins # for fd in self.fd: # i = 1 # while True: # line = linecache.getline(fd.name, i); # if not line: break # yield line # i += 1 def splitline(self, sep=None): for fd in self.fd: fd.seek(0) return fd.read().split(sep) def splitlines(self, sep=None, replace=None): "Return split lines from any file descriptor" for fd in self.fd: fd.seek(0) for line in fd.readlines(): if replace and sep: yield line.replace(replace, sep).split(sep) elif replace: yield line.replace(replace, ' ').split() else: yield line.split(sep) # ### Implemented linecache (for top-plugins) but slows down normal plugins # for fd in self.fd: # if replace and sep: # yield line.replace(replace, sep).split(sep) # elif replace: # yield line.replace(replace, ' ').split() # else: # yield line.split(sep) # i += 1 def statwidth(self): "Return complete stat width" if self.cols: return len(self.vars) * self.colwidth() + len(self.vars) - 1 else: return len(self.nick) * self.colwidth() + len(self.nick) - 1 def colwidth(self): "Return column width" if isinstance(self.name, six.string_types): return self.width else: return len(self.nick) * self.width + len(self.nick) - 1 def title(self): ret = theme['title'] if isinstance(self.name, six.string_types): width = self.statwidth() return ret + self.name[0:width].center(width).replace(' ', '-') + theme['default'] for i, name in enumerate(self.name): width = self.colwidth() ret = ret + name[0:width].center(width).replace(' ', '-') if i + 1 != len(self.vars): if op.color: ret = ret + theme['frame'] + char['dash'] + theme['title'] else: ret = ret + char['space'] return ret def subtitle(self): ret = '' if isinstance(self.name, six.string_types): for i, nick in enumerate(self.nick): ret = ret + theme['subtitle'] + nick[0:self.width].center(self.width) + theme['default'] if i + 1 != len(self.nick): ret = ret + char['space'] return ret else: for i, name in enumerate(self.name): for j, nick in 
enumerate(self.nick): ret = ret + theme['subtitle'] + nick[0:self.width].center(self.width) + theme['default'] if j + 1 != len(self.nick): ret = ret + char['space'] if i + 1 != len(self.name): ret = ret + theme['frame'] + char['colon'] return ret def csvtitle(self): if isinstance(self.name, six.string_types): return '"' + self.name + '"' + char['sep'] * (len(self.nick) - 1) else: ret = '' for i, name in enumerate(self.name): ret = ret + '"' + name + '"' + char['sep'] * (len(self.nick) - 1) if i + 1 != len(self.name): ret = ret + char['sep'] return ret def csvsubtitle(self): ret = '' if isinstance(self.name, six.string_types): for i, nick in enumerate(self.nick): ret = ret + '"' + nick + '"' if i + 1 != len(self.nick): ret = ret + char['sep'] elif len(self.name) == 1: for i, name in enumerate(self.name): for j, nick in enumerate(self.nick): ret = ret + '"' + nick + '"' if j + 1 != len(self.nick): ret = ret + char['sep'] if i + 1 != len(self.name): ret = ret + char['sep'] else: for i, name in enumerate(self.name): for j, nick in enumerate(self.nick): ret = ret + '"' + name + ':' + nick + '"' if j + 1 != len(self.nick): ret = ret + char['sep'] if i + 1 != len(self.name): ret = ret + char['sep'] return ret def check(self): "Check if stat is applicable" # if hasattr(self, 'fd') and not self.fd: # raise Exception, 'File %s does not exist' % self.fd if not self.vars: raise Exception('No objects found, no stats available') if not self.discover: raise Exception('No objects discovered, no stats available') if self.colwidth(): return True raise Exception('Unknown problem, please report') def discover(self, *objlist): return True def show(self): "Display stat results" line = '' if hasattr(self, 'output'): return cprint(self.output, self.type, self.width, self.scale) for i, name in enumerate(self.vars): if i < len(self.types): ctype = self.types[i] else: ctype = self.type if i < len(self.scales): scale = self.scales[i] else: scale = self.scale if isinstance(self.val[name], 
collections.Sequence) and not isinstance(self.val[name], six.string_types): line = line + cprintlist(self.val[name], ctype, self.width, scale) sep = theme['frame'] + char['colon'] if i + 1 != len(self.vars): line = line + sep else: ### Make sure we don't show more values than we have nicknames if i >= len(self.nick): break line = line + cprint(self.val[name], ctype, self.width, scale) sep = char['space'] if i + 1 != len(self.nick): line = line + sep return line def showend(self, totlist, vislist): if vislist and self is not vislist[-1]: return theme['frame'] + char['pipe'] elif totlist != vislist: return theme['frame'] + char['gt'] return '' def showcsv(self): def printcsv(var): if var != round(var): return '%.3f' % var return '%d' % int(round(var)) line = '' for i, name in enumerate(self.vars): if isinstance(self.val[name], types.ListType) or isinstance(self.val[name], types.TupleType): for j, val in enumerate(self.val[name]): line = line + printcsv(val) if j + 1 != len(self.val[name]): line = line + char['sep'] elif isinstance(self.val[name], types.StringType): line = line + self.val[name] else: line = line + printcsv(self.val[name]) if i + 1 != len(self.vars): line = line + char['sep'] return line def showcsvend(self, totlist, vislist): if vislist and self is not vislist[-1]: return char['sep'] elif totlist and self is not totlist[-1]: return char['sep'] return '' class dstat_aio(dstat): def __init__(self): self.name = 'async' self.nick = ('#aio',) self.vars = ('aio',) self.type = 'd' self.width = 5; self.open('/proc/sys/fs/aio-nr') def extract(self): for l in self.splitlines(): if len(l) < 1: continue self.val['aio'] = int(l[0]) class dstat_cpu(dstat): def __init__(self): self.nick = ( 'usr', 'sys', 'idl', 'wai', 'stl' ) self.type = 'p' self.width = 3 self.scale = 34 self.open('/proc/stat') self.cols = 5 def discover(self, *objlist): ret = [] for l in self.splitlines(): if len(l) < 9 or l[0][0:3] != 'cpu': continue ret.append(l[0][3:]) ret.sort() for item in 
objlist: ret.append(item) return ret def vars(self): ret = [] if op.cpulist and 'all' in op.cpulist: varlist = [] cpu = 0 while cpu < cpunr: varlist.append(str(cpu)) cpu = cpu + 1 # if len(varlist) > 2: varlist = varlist[0:2] elif op.cpulist: varlist = op.cpulist else: varlist = ('total',) for name in varlist: if name in self.discover + ['total']: ret.append(name) return ret def name(self): ret = [] for name in self.vars: if name == 'total': ret.append('total cpu usage') else: ret.append('cpu' + name + ' usage') return ret def extract(self): for l in self.splitlines(): if len(l) < 9: continue for name in self.vars: if l[0] == 'cpu' + name or ( l[0] == 'cpu' and name == 'total' ): self.set2[name] = ( int(l[1]) + int(l[2]) + int(l[6]) + int(l[7]), int(l[3]), int(l[4]), int(l[5]), int(l[8]) ) for name in self.vars: for i in list(range(self.cols)): if sum(self.set2[name]) > sum(self.set1[name]): self.val[name][i] = 100.0 * (self.set2[name][i] - self.set1[name][i]) / (sum(self.set2[name]) - sum(self.set1[name])) else: self.val[name][i] = 0 # print("Error: tick problem detected, this should never happen !", file=sys.stderr) if step == op.delay: self.set1.update(self.set2) class dstat_cpu_use(dstat_cpu): def __init__(self): self.name = 'per cpu usage' self.type = 'p' self.width = 3 self.scale = 34 self.open('/proc/stat') self.cols = 7 if not op.cpulist: self.vars = [ str(x) for x in list(range(cpunr)) ] def extract(self): for l in self.splitlines(): if len(l) < 9: continue for name in self.vars: if l[0] == 'cpu' + name or ( l[0] == 'cpu' and name == 'total' ): self.set2[name] = ( int(l[1]) + int(l[2]), int(l[3]), int(l[4]), int(l[5]), int(l[6]), int(l[7]), int(l[8]) ) for name in self.vars: if sum(self.set2[name]) > sum(self.set1[name]): self.val[name] = 100.0 - 100.0 * (self.set2[name][2] - self.set1[name][2]) / (sum(self.set2[name]) - sum(self.set1[name])) else: self.val[name] = 0 # print("Error: tick problem detected, this should never happen !", file=sys.stderr) if 
step == op.delay: self.set1.update(self.set2) class dstat_cpu_adv(dstat_cpu): def __init__(self): self.nick = ( 'usr', 'sys', 'idl', 'wai', 'hiq', 'siq', 'stl' ) self.type = 'p' self.width = 3 self.scale = 34 self.open('/proc/stat') self.cols = 7 def extract(self): for l in self.splitlines(): if len(l) < 9: continue for name in self.vars: if l[0] == 'cpu' + name or ( l[0] == 'cpu' and name == 'total' ): self.set2[name] = ( int(l[1]) + int(l[2]), int(l[3]), int(l[4]), int(l[5]), int(l[6]), int(l[7]), int(l[8]) ) for name in self.vars: for i in list(range(self.cols)): if sum(self.set2[name]) > sum(self.set1[name]): self.val[name][i] = 100.0 * (self.set2[name][i] - self.set1[name][i]) / (sum(self.set2[name]) - sum(self.set1[name])) else: self.val[name][i] = 0 # print("Error: tick problem detected, this should never happen !", file=sys.stderr) if step == op.delay: self.set1.update(self.set2) class dstat_cpu24(dstat): def __init__(self): self.nick = ( 'usr', 'sys', 'idl') self.type = 'p' self.width = 3 self.scale = 34 self.open('/proc/stat') self.cols = 3 def discover(self, *objlist): ret = [] for l in self.splitlines(): if len(l) != 5 or l[0][0:3] != 'cpu': continue ret.append(l[0][3:]) ret.sort() for item in objlist: ret.append(item) return ret def vars(self): ret = [] if op.cpulist and 'all' in op.cpulist: varlist = [] cpu = 0 while cpu < cpunr: varlist.append(str(cpu)) cpu = cpu + 1 # if len(varlist) > 2: varlist = varlist[0:2] elif op.cpulist: varlist = op.cpulist else: varlist = ('total',) for name in varlist: if name in self.discover + ['total']: ret.append(name) return ret def name(self): ret = [] for name in self.vars: if name == 'total': ret.append('cpu usage') else: ret.append('cpu' + name) return ret def extract(self): for l in self.splitlines(): for name in self.vars: if l[0] == 'cpu' + name or ( l[0] == 'cpu' and name == 'total' ): self.set2[name] = ( int(l[1]) + int(l[2]), int(l[3]), int(l[4]) ) for name in self.vars: for i in list(range(self.cols)): 
                self.val[name][i] = 100.0 * (self.set2[name][i] - self.set1[name][i]) / (sum(self.set2[name]) - sum(self.set1[name]))
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_disk(dstat):
    def __init__(self):
        self.nick = ('read', 'writ')
        self.type = 'b'
        self.diskfilter = re.compile('^([hsv]d[a-z]+\d+|cciss/c\d+d\d+p\d+|dm-\d+|md\d+|mmcblk\d+p\d0|VxVM\d+)$')
        self.open('/proc/diskstats')
        self.cols = 2

    def discover(self, *objlist):
        ret = []
        for l in self.splitlines():
            if len(l) < 13: continue
            if l[3:] == ['0',] * 11: continue
            name = l[2]
            ret.append(name)
        for item in objlist: ret.append(item)
        if not ret:
            raise Exception("No suitable block devices found to monitor")
        return ret

    def basename(self, disk):
        "Strip /dev/ and convert symbolic link"
        if disk[:5] == '/dev/':
            ### file or symlink
            if os.path.exists(disk):
                ### e.g. /dev/disk/by-uuid/15e40cc5-85de-40ea-b8fb-cb3a2eaf872
                if os.path.islink(disk):
                    target = os.readlink(disk)
                    ### convert relative pathname to absolute
                    if target[0] != '/':
                        target = os.path.join(os.path.dirname(disk), target)
                        target = os.path.normpath(target)
                    print('dstat: symlink %s -> %s' % (disk, target))
                    disk = target
                ### trim leading /dev/
                return disk[5:]
            else:
                print('dstat: %s does not exist' % disk)
        else:
            return disk

    def vars(self):
        ret = []
        if op.disklist:
            varlist = list(map(self.basename, op.disklist))
        elif not op.full:
            varlist = ('total',)
        else:
            varlist = []
            for name in self.discover:
                if self.diskfilter.match(name): continue
                if name not in blockdevices(): continue
                varlist.append(name)
#           if len(varlist) > 2: varlist = varlist[0:2]
            varlist.sort()
        for name in varlist:
            if name in self.discover + ['total'] or name in op.diskset:
                ret.append(name)
        return ret

    def name(self):
        return ['dsk/'+sysfs_dev(name) for name in self.vars]

    def extract(self):
        for name in self.vars: self.set2[name] = (0, 0)
        for l in self.splitlines():
            if len(l) < 13: continue
            if l[5] == '0' and l[9] == '0': continue
            name = l[2]
            if l[3:] == ['0',] * 11: continue
            if not self.diskfilter.match(name):
                self.set2['total'] = ( self.set2['total'][0] + int(l[5]), self.set2['total'][1] + int(l[9]) )
            if name in self.vars and name != 'total':
                self.set2[name] = ( self.set2[name][0] + int(l[5]), self.set2[name][1] + int(l[9]) )
            for diskset in self.vars:
                if diskset in op.diskset:
                    for disk in op.diskset[diskset]:
                        if fnmatch.fnmatch(name, disk):
                            self.set2[diskset] = ( self.set2[diskset][0] + int(l[5]), self.set2[diskset][1] + int(l[9]) )
        for name in self.set2:
            self.val[name] = list(map(lambda x, y: (y - x) * 512.0 / elapsed, self.set1[name], self.set2[name]))
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_disk24(dstat):
    def __init__(self):
        self.nick = ('read', 'writ')
        self.type = 'b'
        self.diskfilter = re.compile('^([hsv]d[a-z]+\d+|cciss/c\d+d\d+p\d+|dm-\d+|md\d+|mmcblk\d+p\d0|VxVM\d+)$')
        self.open('/proc/partitions')
        if self.fd and not self.discover:
            raise Exception('Kernel has no per-partition I/O accounting [CONFIG_BLK_STATS], use at least 2.4.20')
        self.cols = 2

    def discover(self, *objlist):
        ret = []
        for l in self.splitlines():
            if len(l) < 15 or l[0] == 'major' or int(l[1]) % 16 != 0: continue
            name = l[3]
            ret.append(name)
        for item in objlist: ret.append(item)
        if not ret:
            raise Exception("No suitable block devices found to monitor")
        return ret

    def basename(self, disk):
        "Strip /dev/ and convert symbolic link"
        if disk[:5] == '/dev/':
            ### file or symlink
            if os.path.exists(disk):
                ### e.g. /dev/disk/by-uuid/15e40cc5-85de-40ea-b8fb-cb3a2eaf872
                if os.path.islink(disk):
                    target = os.readlink(disk)
                    ### convert relative pathname to absolute
                    if target[0] != '/':
                        target = os.path.join(os.path.dirname(disk), target)
                        target = os.path.normpath(target)
                    print('dstat: symlink %s -> %s' % (disk, target))
                    disk = target
                ### trim leading /dev/
                return disk[5:]
            else:
                print('dstat: %s does not exist' % disk)
        else:
            return disk

    def vars(self):
        ret = []
        if op.disklist:
            varlist = list(map(self.basename, op.disklist))
        elif not op.full:
            varlist = ('total',)
        else:
            varlist = []
            for name in self.discover:
                if self.diskfilter.match(name): continue
                varlist.append(name)
#           if len(varlist) > 2: varlist = varlist[0:2]
            varlist.sort()
        for name in varlist:
            if name in self.discover + ['total'] or name in op.diskset:
                ret.append(name)
        return ret

    def name(self):
        return ['dsk/'+sysfs_dev(name) for name in self.vars]

    def extract(self):
        for name in self.vars: self.set2[name] = (0, 0)
        for l in self.splitlines():
            if len(l) < 15 or l[0] == 'major' or int(l[1]) % 16 != 0: continue
            name = l[3]
            if not self.diskfilter.match(name):
                self.set2['total'] = ( self.set2['total'][0] + int(l[6]), self.set2['total'][1] + int(l[10]) )
            if name in self.vars:
                self.set2[name] = ( self.set2[name][0] + int(l[6]), self.set2[name][1] + int(l[10]) )
            for diskset in self.vars:
                if diskset in op.diskset:
                    for disk in op.diskset[diskset]:
                        if fnmatch.fnmatch(name, disk):
                            self.set2[diskset] = ( self.set2[diskset][0] + int(l[6]), self.set2[diskset][1] + int(l[10]) )
        for name in self.set2:
            self.val[name] = list(map(lambda x, y: (y - x) * 512.0 / elapsed, self.set1[name], self.set2[name]))
        if step == op.delay:
            self.set1.update(self.set2)

### FIXME: Needs rework, does anyone care ?
class dstat_disk24_old(dstat):
    def __init__(self):
        self.nick = ('read', 'writ')
        self.type = 'b'
        self.diskfilter = re.compile('^([hsv]d[a-z]+\d+|cciss/c\d+d\d+p\d+|dm-\d+|md\d+|mmcblk\d+p\d0|VxVM\d+)$')
        self.regexp = re.compile('^\((\d+),(\d+)\):\(\d+,\d+,(\d+),\d+,(\d+)\)$')
        self.open('/proc/stat')
        self.cols = 2

    def discover(self, *objlist):
        ret = []
        ### Iterate over raw lines so the full line is available for the
        ### per-device pair matching below (the old code referenced an
        ### undefined 'line' variable here)
        for line in self.readlines():
            l = line.split(':')
            if len(l) < 3: continue
            name = l[0]
            if name != 'disk_io': continue
            for pair in line.split()[1:]:
                m = self.regexp.match(pair)
                if not m: continue
                l = m.groups()
                if len(l) < 4: continue
                name = dev(int(l[0]), int(l[1]))
                ret.append(name)
            break
        for item in objlist: ret.append(item)
        if not ret:
            raise Exception("No suitable block devices found to monitor")
        return ret

    def vars(self):
        ret = []
        if op.disklist:
            varlist = op.disklist
        elif not op.full:
            varlist = ('total',)
        else:
            varlist = []
            for name in self.discover:
                if self.diskfilter.match(name): continue
                varlist.append(name)
#           if len(varlist) > 2: varlist = varlist[0:2]
            varlist.sort()
        for name in varlist:
            if name in self.discover + ['total'] or name in op.diskset:
                ret.append(name)
        return ret

    def name(self):
        return ['dsk/'+name for name in self.vars]

    def extract(self):
        for name in self.vars: self.set2[name] = (0, 0)
        for line in self.readlines():
            l = line.split(':')
            if len(l) < 3: continue
            name = l[0]
            if name != 'disk_io': continue
            for pair in line.split()[1:]:
                m = self.regexp.match(pair)
                if not m: continue
                l = m.groups()
                if len(l) < 4: continue
                name = dev(int(l[0]), int(l[1]))
                if not self.diskfilter.match(name):
                    self.set2['total'] = ( self.set2['total'][0] + int(l[2]), self.set2['total'][1] + int(l[3]) )
                if name in self.vars and name != 'total':
                    self.set2[name] = ( self.set2[name][0] + int(l[2]), self.set2[name][1] + int(l[3]) )
                for diskset in self.vars:
                    if diskset in op.diskset:
                        for disk in op.diskset[diskset]:
                            if fnmatch.fnmatch(name, disk):
                                self.set2[diskset] = ( self.set2[diskset][0] + int(l[2]), self.set2[diskset][1] + int(l[3]) )
            break
        for name in self.set2:
            self.val[name] = list(map(lambda x, y: (y - x) * 512.0 / elapsed, self.set1[name], self.set2[name]))
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_epoch(dstat):
    def __init__(self):
        self.name = 'epoch'
        self.vars = ('epoch',)
        self.width = 10
        if op.debug:
            self.width = 13
        self.scale = 0

    ### We are now using the starttime instead of the execution time of this plugin
    def extract(self):
#       self.val['epoch'] = time.time()
        self.val['epoch'] = starttime

class dstat_fs(dstat):
    def __init__(self):
        self.name = 'filesystem'
        self.vars = ('files', 'inodes')
        self.type = 'd'
        self.width = 6
        self.scale = 1000

    def extract(self):
        for line in dopen('/proc/sys/fs/file-nr'):
            l = line.split()
            if len(l) < 1: continue
            self.val['files'] = int(l[0])
        for line in dopen('/proc/sys/fs/inode-nr'):
            l = line.split()
            if len(l) < 2: continue
            self.val['inodes'] = int(l[0]) - int(l[1])

class dstat_int(dstat):
    def __init__(self):
        self.name = 'interrupts'
        self.type = 'd'
        self.width = 5
        self.scale = 1000
        self.open('/proc/stat')
        self.intmap = self.intmap()

    def intmap(self):
        ret = {}
        for line in dopen('/proc/interrupts'):
            l = line.split()
            if len(l) <= cpunr: continue
            l1 = l[0].split(':')[0]
            l2 = ' '.join(l[cpunr+2:]).split(',')
            ret[l1] = l1
            for name in l2:
                ret[name.strip().lower()] = l1
        return ret

    def discover(self, *objlist):
        ret = []
        for l in self.splitlines():
            if l[0] != 'intr': continue
            for name, i in enumerate(l[2:]):
                if int(i) > 10:
                    ret.append(str(name))
        return ret

#   def check(self):
#       if self.fd[0] and self.vars:
#           self.fd[0].seek(0)
#           for l in self.fd[0].splitlines():
#               if l[0] != 'intr': continue
#               return True
#       return False

    def vars(self):
        ret = []
        if op.intlist:
            varlist = op.intlist
        else:
            varlist = self.discover
            for name in varlist:
                if name in ('0', '1', '2', '8', 'NMI', 'LOC', 'MIS', 'CPU0'):
                    varlist.remove(name)
            if not op.full and len(varlist) > 3:
                varlist = varlist[-3:]
        for name in varlist:
            if name in self.discover + ['total',]:
                ret.append(name)
            elif name.lower() in self.intmap:
                ret.append(self.intmap[name.lower()])
        return ret

    def extract(self):
        for l in self.splitlines():
            if not l or l[0] != 'intr': continue
            for name in self.vars:
                if name != 'total':
                    self.set2[name] = int(l[int(name) + 2])
            self.set2['total'] = int(l[1])
        for name in self.vars:
            self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_int24(dstat):
    def __init__(self):
        self.name = 'interrupts'
        self.type = 'd'
        self.width = 5
        self.scale = 1000
        self.open('/proc/interrupts')

    def intmap(self):
        ret = {}
        for l in self.splitlines():
            if len(l) <= cpunr: continue
            l1 = l[0].split(':')[0]
            l2 = ' '.join(l[cpunr+2:]).split(',')
            ret[l1] = l1
            for name in l2:
                ret[name.strip().lower()] = l1
        return ret

    def discover(self, *objlist):
        ret = []
        for l in self.splitlines():
            if len(l) < cpunr+1: continue
            name = l[0].split(':')[0]
            if int(l[1]) > 10:
                ret.append(name)
        return ret

#   def check(self):
#       if self.fd and self.discover:
#           self.fd[0].seek(0)
#           for l in self.fd[0].splitlines():
#               if l[0] != 'intr' or len(l) > 2: continue
#               return True
#       return False

    def vars(self):
        ret = []
        if op.intlist:
            varlist = op.intlist
        else:
            varlist = self.discover
            for name in varlist:
                if name in ('0', '1', '2', '8', 'CPU0', 'ERR', 'LOC', 'MIS', 'NMI'):
                    varlist.remove(name)
            if not op.full and len(varlist) > 3:
                varlist = varlist[-3:]
        for name in varlist:
            if name in self.discover:
                ret.append(name)
            elif name.lower() in self.intmap:
                ret.append(self.intmap[name.lower()])
        return ret

    def extract(self):
        for l in self.splitlines():
            if len(l) < cpunr+1: continue
            name = l[0].split(':')[0]
            if name in self.vars:
                self.set2[name] = 0
                for i in l[1:1+cpunr]:
                    self.set2[name] = self.set2[name] + int(i)
#           elif len(l) > 2 + cpunr:
#               for hw in self.vars:
#                   for mod in l[2+cpunr:]:
#                       self.set2[mod] = int(l[1])
        for name in self.set2:
            self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_io(dstat):
    def __init__(self):
        self.nick = ('read', 'writ')
        self.type = 'f'
        self.width = 5
        self.scale = 1000
        self.diskfilter = re.compile('^([hsv]d[a-z]+\d+|cciss/c\d+d\d+p\d+|dm-\d+|md\d+|mmcblk\d+p\d0|VxVM\d+)$')
        self.open('/proc/diskstats')
        self.cols = 2

    def discover(self, *objlist):
        ret = []
        for l in self.splitlines():
            if len(l) < 13: continue
            if l[3:] == ['0',] * 11: continue
            name = l[2]
            ret.append(name)
        for item in objlist: ret.append(item)
        if not ret:
            raise Exception("No suitable block devices found to monitor")
        return ret

    def vars(self):
        ret = []
        if op.disklist:
            varlist = op.disklist
        elif not op.full:
            varlist = ('total',)
        else:
            varlist = []
            for name in self.discover:
                if self.diskfilter.match(name): continue
                if name not in blockdevices(): continue
                varlist.append(name)
#           if len(varlist) > 2: varlist = varlist[0:2]
            varlist.sort()
        for name in varlist:
            if name in self.discover + ['total'] or name in op.diskset:
                ret.append(name)
        return ret

    def name(self):
        return ['io/'+name for name in self.vars]

    def extract(self):
        for name in self.vars: self.set2[name] = (0, 0)
        for l in self.splitlines():
            if len(l) < 13: continue
            if l[3] == '0' and l[7] == '0': continue
            name = l[2]
            if l[3:] == ['0',] * 11: continue
            if not self.diskfilter.match(name):
                self.set2['total'] = ( self.set2['total'][0] + int(l[3]), self.set2['total'][1] + int(l[7]) )
            if name in self.vars and name != 'total':
                self.set2[name] = ( self.set2[name][0] + int(l[3]), self.set2[name][1] + int(l[7]) )
            for diskset in self.vars:
                if diskset in op.diskset:
                    for disk in op.diskset[diskset]:
                        if fnmatch.fnmatch(name, disk):
                            self.set2[diskset] = ( self.set2[diskset][0] + int(l[3]), self.set2[diskset][1] + int(l[7]) )
        for name in self.set2:
            self.val[name] = list(map(lambda x, y: (y - x) * 1.0 / elapsed, self.set1[name], self.set2[name]))
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_ipc(dstat):
    def __init__(self):
        self.name = 'sysv ipc'
        self.vars = ('msg', 'sem', 'shm')
        self.type = 'd'
        self.width = 3
        self.scale = 10

    def extract(self):
        for name in self.vars:
            self.val[name] = len(dopen('/proc/sysvipc/'+name).readlines()) - 1

class dstat_load(dstat):
    def __init__(self):
        self.name = 'load avg'
        self.nick = ('1m', '5m', '15m')
        self.vars = ('load1', 'load5', 'load15')
        self.type = 'f'
        self.width = 4
        self.scale = 0.5
        self.open('/proc/loadavg')

    def extract(self):
        for l in self.splitlines():
            if len(l) < 3: continue
            self.val['load1'] = float(l[0])
            self.val['load5'] = float(l[1])
            self.val['load15'] = float(l[2])

class dstat_lock(dstat):
    def __init__(self):
        self.name = 'file locks'
        self.nick = ('pos', 'lck', 'rea', 'wri')
        self.vars = ('posix', 'flock', 'read', 'write')
        self.type = 'f'
        self.width = 3
        self.scale = 10
        self.open('/proc/locks')

    def extract(self):
        for name in self.vars: self.val[name] = 0
        for l in self.splitlines():
            if len(l) < 4: continue
            if l[1] == 'POSIX':
                self.val['posix'] += 1
            elif l[1] == 'FLOCK':
                self.val['flock'] += 1
            if l[3] == 'READ':
                self.val['read'] += 1
            elif l[3] == 'WRITE':
                self.val['write'] += 1

class dstat_mem(dstat):
    def __init__(self):
        self.name = 'memory usage'
        self.nick = ('used', 'free', 'buff', 'cach')
        self.vars = ('MemUsed', 'MemFree', 'Buffers', 'Cached')
        self.open('/proc/meminfo')

    def extract(self):
        adv_extract_vars = ('MemTotal', 'Shmem', 'SReclaimable')
        adv_val = {}
        for l in self.splitlines():
            if len(l) < 2: continue
            name = l[0].split(':')[0]
            if name in self.vars:
                self.val[name] = int(l[1]) * 1024.0
            if name in adv_extract_vars:
                adv_val[name] = int(l[1]) * 1024.0
        self.val['MemUsed'] = adv_val['MemTotal'] - self.val['MemFree'] - self.val['Buffers'] - self.val['Cached'] - adv_val['SReclaimable'] + adv_val['Shmem']

class dstat_mem_adv(dstat_mem):
    """
    Advanced memory usage

    Displays memory usage similarly to the internal plugin but with added
    total, shared and reclaimable counters. Additionally, shared memory is
    added and reclaimable memory is subtracted from the used memory counter.
    """
    def __init__(self):
        self.name = 'advanced memory usage'
        self.nick = ('total', 'used', 'free', 'buff', 'cach', 'dirty', 'shmem', 'recl')
        self.vars = ('MemTotal', 'MemUsed', 'MemFree', 'Buffers', 'Cached', 'Dirty', 'Shmem', 'SReclaimable')
        self.open('/proc/meminfo')

class dstat_net(dstat):
    def __init__(self):
        self.nick = ('recv', 'send')
        self.type = 'b'
        self.totalfilter = re.compile('^(lo|bond\d+|face|.+\.\d+)$')
        self.open('/proc/net/dev')
        self.cols = 2

    def discover(self, *objlist):
        ret = []
        for l in self.splitlines(replace=':'):
            if len(l) < 17: continue
            if l[2] == '0' and l[10] == '0': continue
            name = l[0]
            if name not in ('lo', 'face'):
                ret.append(name)
        ret.sort()
        for item in objlist: ret.append(item)
        return ret

    def vars(self):
        ret = []
        if op.netlist:
            varlist = op.netlist
        elif not op.full:
            varlist = ('total',)
        else:
            varlist = self.discover
#           if len(varlist) > 2: varlist = varlist[0:2]
            varlist.sort()
        for name in varlist:
            if name in self.discover + ['total', 'lo']:
                ret.append(name)
        if not ret:
            raise Exception("No suitable network interfaces found to monitor")
        return ret

    def name(self):
        return ['net/'+name for name in self.vars]

    def extract(self):
        self.set2['total'] = [0, 0]
        for l in self.splitlines(replace=':'):
            if len(l) < 17: continue
            if l[2] == '0' and l[10] == '0': continue
            name = l[0]
            if name in self.vars:
                self.set2[name] = ( int(l[1]), int(l[9]) )
            if not self.totalfilter.match(name):
                self.set2['total'] = ( self.set2['total'][0] + int(l[1]), self.set2['total'][1] + int(l[9]))
        if update:
            for name in self.set2:
                self.val[name] = list(map(lambda x, y: (y - x) * 1.0 / elapsed, self.set1[name], self.set2[name]))
                if self.val[name][0] < 0: self.val[name][0] += maxint + 1
                if self.val[name][1] < 0: self.val[name][1] += maxint + 1
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_page(dstat):
    def __init__(self):
        self.name = 'paging'
        self.nick = ('in', 'out')
        self.vars = ('pswpin', 'pswpout')
        self.type = 'd'
        self.open('/proc/vmstat')

    def extract(self):
        for l in self.splitlines():
            if len(l) < 2: continue
            name = l[0]
            if name in self.vars:
                self.set2[name] = int(l[1])
        for name in self.vars:
            self.val[name] = (self.set2[name] - self.set1[name]) * pagesize * 1.0 / elapsed
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_page24(dstat):
    def __init__(self):
        self.name = 'paging'
        self.nick = ('in', 'out')
        self.vars = ('pswpin', 'pswpout')
        self.type = 'd'
        self.open('/proc/stat')

    def extract(self):
        for l in self.splitlines():
            if len(l) < 3: continue
            name = l[0]
            if name != 'swap': continue
            self.set2['pswpin'] = int(l[1])
            self.set2['pswpout'] = int(l[2])
            break
        for name in self.vars:
            self.val[name] = (self.set2[name] - self.set1[name]) * pagesize * 1.0 / elapsed
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_proc(dstat):
    def __init__(self):
        self.name = 'procs'
        self.nick = ('run', 'blk', 'new')
        self.vars = ('procs_running', 'procs_blocked', 'processes')
        self.type = 'f'
        self.width = 3
        self.scale = 10
        self.open('/proc/stat')

    def extract(self):
        for l in self.splitlines():
            if len(l) < 2: continue
            name = l[0]
            if name == 'processes':
                self.val['processes'] = 0
                self.set2[name] = int(l[1])
            elif name == 'procs_running':
                self.set2[name] = self.set2[name] + int(l[1]) - 1
            elif name == 'procs_blocked':
                self.set2[name] = self.set2[name] + int(l[1])
        self.val['processes'] = (self.set2['processes'] - self.set1['processes']) * 1.0 / elapsed
        for name in ('procs_running', 'procs_blocked'):
            self.val[name] = self.set2[name] * 1.0
        if step == op.delay:
            self.set1.update(self.set2)
            for name in ('procs_running', 'procs_blocked'):
                self.set2[name] = 0

class dstat_raw(dstat):
    def __init__(self):
        self.name = 'raw'
        self.nick = ('raw',)
        self.vars = ('sockets',)
        self.type = 'd'
        self.width = 4
        self.scale = 1000
        self.open('/proc/net/raw')

    def extract(self):
        lines = -1
        for line in self.readlines():
            lines += 1
        self.val['sockets'] = lines
        ### Cannot use len() on generator
#       self.val['sockets'] = len(self.readlines()) - 1

class dstat_socket(dstat):
    def __init__(self):
        self.name = 'sockets'
        self.type = 'd'
        self.width = 4
        self.scale = 1000
        self.open('/proc/net/sockstat')
        self.nick = ('tot', 'tcp', 'udp', 'raw', 'frg')
        self.vars = ('sockets:', 'TCP:', 'UDP:', 'RAW:', 'FRAG:')

    def extract(self):
        for l in self.splitlines():
            if len(l) < 3: continue
            self.val[l[0]] = int(l[2])
        self.val['other'] = self.val['sockets:'] - self.val['TCP:'] - self.val['UDP:'] - self.val['RAW:'] - self.val['FRAG:']

class dstat_swap(dstat):
    def __init__(self):
        self.name = 'swap'
        self.nick = ('used', 'free')
        self.type = 'd'
        self.open('/proc/swaps')

    def discover(self, *objlist):
        ret = []
        for l in self.splitlines():
            if len(l) < 5: continue
            if l[0] == 'Filename': continue
            try:
                int(l[2])
                int(l[3])
            except:
                continue
#           ret.append(improve(l[0]))
            ret.append(l[0])
        ret.sort()
        for item in objlist: ret.append(item)
        return ret

    def vars(self):
        ret = []
        if op.swaplist:
            varlist = op.swaplist
        elif not op.full:
            varlist = ('total',)
        else:
            varlist = self.discover
#           if len(varlist) > 2: varlist = varlist[0:2]
            varlist.sort()
        for name in varlist:
            if name in self.discover + ['total']:
                ret.append(name)
        if not ret:
            raise Exception("No suitable swap devices found to monitor")
        return ret

    def name(self):
        return ['swp/'+improve(name) for name in self.vars]

    def extract(self):
        self.val['total'] = [0, 0]
        for l in self.splitlines():
            if len(l) < 5 or l[0] == 'Filename': continue
            name = l[0]
            self.val[name] = ( int(l[3]) * 1024.0, (int(l[2]) - int(l[3])) * 1024.0 )
            self.val['total'] = ( self.val['total'][0] + self.val[name][0], self.val['total'][1] + self.val[name][1])

class dstat_swap_old(dstat):
    def __init__(self):
        self.name = 'swap'
        self.nick = ('used', 'free')
        self.vars = ('SwapUsed', 'SwapFree')
        self.type = 'd'
        self.open('/proc/meminfo')

    def extract(self):
        for l in self.splitlines():
            if len(l) < 2: continue
            name = l[0].split(':')[0]
            if name in self.vars + ('SwapTotal',):
                self.val[name] = int(l[1]) * 1024.0
        self.val['SwapUsed'] = self.val['SwapTotal'] - self.val['SwapFree']
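### All of the extract() methods above share the same two-snapshot pattern:
### sample monotonically increasing kernel counters into set2, compute
### (set2 - set1) / elapsed, then roll set2 into set1. The standalone helper
### below is an illustrative sketch of that arithmetic only (counter_rates is
### not part of dstat); the optional rollover correction mirrors what
### dstat_net does with maxint for wrapping 32-bit counters.

```python
def counter_rates(prev, curr, elapsed, wrap=None):
    "Per-second rates from two snapshots of monotonically increasing counters"
    rates = []
    for p, c in zip(prev, curr):
        delta = c - p
        ### A negative delta means the counter wrapped; undo the rollover
        if wrap is not None and delta < 0:
            delta += wrap + 1
        rates.append(delta * 1.0 / elapsed)
    return rates
```

### e.g. counter_rates([0, 100], [512, 1124], 2.0) yields [256.0, 512.0],
### the same result the lambda in dstat_disk.extract() would produce per field.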
class dstat_sys(dstat):
    def __init__(self):
        self.name = 'system'
        self.nick = ('int', 'csw')
        self.vars = ('intr', 'ctxt')
        self.type = 'd'
        self.width = 5
        self.scale = 1000
        self.open('/proc/stat')

    def extract(self):
        for l in self.splitlines():
            if len(l) < 2: continue
            name = l[0]
            if name in self.vars:
                self.set2[name] = int(l[1])
        for name in self.vars:
            self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_tcp(dstat):
    def __init__(self):
        self.name = 'tcp sockets'
        self.nick = ('lis', 'act', 'syn', 'tim', 'clo')
        self.vars = ('listen', 'established', 'syn', 'wait', 'close')
        self.type = 'd'
        self.width = 4
        self.scale = 1000
        self.open('/proc/net/tcp', '/proc/net/tcp6')

    def extract(self):
        for name in self.vars: self.val[name] = 0
        for l in self.splitlines():
            if len(l) < 12: continue
            ### 01: established, 02: syn_sent, 03: syn_recv, 04: fin_wait1,
            ### 05: fin_wait2, 06: time_wait, 07: close, 08: close_wait,
            ### 09: last_ack, 0A: listen, 0B: closing
            if l[3] in ('0A',): self.val['listen'] += 1
            elif l[3] in ('01',): self.val['established'] += 1
            elif l[3] in ('02', '03', '09',): self.val['syn'] += 1
            elif l[3] in ('06',): self.val['wait'] += 1
            elif l[3] in ('04', '05', '07', '08', '0B',): self.val['close'] += 1

class dstat_time(dstat):
    def __init__(self):
        self.name = 'system'
        self.timefmt = os.getenv('DSTAT_TIMEFMT') or '%d-%m %H:%M:%S'
        self.type = 's'
        if op.debug:
            self.width = len(time.strftime(self.timefmt, time.localtime())) + 4
        else:
            self.width = len(time.strftime(self.timefmt, time.localtime()))
        self.scale = 0
        self.vars = ('time',)

    ### We are now using the starttime for this plugin, not the execution time of this plugin
    def extract(self):
        if op.debug:
            self.val['time'] = time.strftime(self.timefmt, time.localtime(starttime)) + ".%03d" % (round(starttime * 1000 % 1000 ))
        else:
            self.val['time'] = time.strftime(self.timefmt, time.localtime(starttime))

class dstat_udp(dstat):
    def __init__(self):
        self.name = 'udp'
        self.nick = ('lis', 'act')
        self.vars = ('listen', 'established')
        self.type = 'd'
        self.width = 4
        self.scale = 1000
        self.open('/proc/net/udp', '/proc/net/udp6')

    def extract(self):
        for name in self.vars: self.val[name] = 0
        for l in self.splitlines():
            if l[3] == '07': self.val['listen'] += 1
            elif l[3] == '01': self.val['established'] += 1

class dstat_unix(dstat):
    def __init__(self):
        self.name = 'unix sockets'
        self.nick = ('dgm', 'str', 'lis', 'act')
        self.vars = ('datagram', 'stream', 'listen', 'established')
        self.type = 'd'
        self.width = 4
        self.scale = 1000
        self.open('/proc/net/unix')

    def extract(self):
        for name in self.vars: self.val[name] = 0
        for l in self.splitlines():
            if l[4] == '0002':
                self.val['datagram'] += 1
            elif l[4] == '0001':
                self.val['stream'] += 1
                if l[5] == '01':
                    self.val['listen'] += 1
                elif l[5] == '03':
                    self.val['established'] += 1

class dstat_vm(dstat):
    def __init__(self):
        self.name = 'virtual memory'
        self.nick = ('majpf', 'minpf', 'alloc', 'free')
        ### Page allocations should include all page zones, not just ZONE_NORMAL,
        ### but also ZONE_DMA, ZONE_HIGHMEM, ZONE_DMA32 (depending on architecture)
        self.vars = ('pgmajfault', 'pgfault', 'pgalloc_*', 'pgfree')
        self.type = 'd'
        self.width = 5
        self.scale = 1000
        self.open('/proc/vmstat')

    def extract(self):
        for name in self.vars:
            self.set2[name] = 0
        for l in self.splitlines():
            if len(l) < 2: continue
            for name in self.vars:
                if fnmatch.fnmatch(l[0], name):
                    self.set2[name] += int(l[1])
        for name in self.vars:
            self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
        if step == op.delay:
            self.set1.update(self.set2)

class dstat_vm_adv(dstat_vm):
    def __init__(self):
        self.name = 'advanced virtual memory'
        self.nick = ('steal', 'scanK', 'scanD', 'pgoru', 'astll')
        self.vars = ('pgsteal_*', 'pgscan_kswapd_*', 'pgscan_direct_*', 'pageoutrun', 'allocstall')
        self.type = 'd'
        self.width = 5
        self.scale = 1000
        self.open('/proc/vmstat')

class dstat_zones(dstat):
    def __init__(self):
        self.name = 'zones memory'
#       self.nick = ('dmaF', 'dmaH', 'd32F', 'd32H', 'movaF', 'movaH')
#       self.vars = ('DMA_free', 'DMA_high', 'DMA32_free', 'DMA32_high', 'Movable_free', 'Movable_high')
        self.nick = ('d32F', 'd32H', 'normF', 'normH')
        self.vars = ('DMA32_free', 'DMA32_high', 'Normal_free', 'Normal_high')
        self.type = 'd'
        self.width = 5
        self.scale = 1000
        self.open('/proc/zoneinfo')

    ### Page allocations should include all page zones, not just ZONE_NORMAL,
    ### but also ZONE_DMA, ZONE_HIGHMEM, ZONE_DMA32 (depending on architecture)
    def extract(self):
        for l in self.splitlines():
            if len(l) < 2: continue
            if l[0].startswith('Node'):
                zone = l[3]
                found_high = 0
            if l[0].startswith('pages'):
                self.val[zone+'_free'] = int(l[2])
            if l[0].startswith('high') and not found_high:
                self.val[zone+'_high'] = int(l[1])
                found_high = 1

### END STATS DEFINITIONS ###

color = {
    'black': '\033[0;30m',
    'darkred': '\033[0;31m',
    'darkgreen': '\033[0;32m',
    'darkyellow': '\033[0;33m',
    'darkblue': '\033[0;34m',
    'darkmagenta': '\033[0;35m',
    'darkcyan': '\033[0;36m',
    'gray': '\033[0;37m',

    'darkgray': '\033[1;30m',
    'red': '\033[1;31m',
    'green': '\033[1;32m',
    'yellow': '\033[1;33m',
    'blue': '\033[1;34m',
    'magenta': '\033[1;35m',
    'cyan': '\033[1;36m',
    'white': '\033[1;37m',

    'blackbg': '\033[40m',
    'redbg': '\033[41m',
    'greenbg': '\033[42m',
    'yellowbg': '\033[43m',
    'bluebg': '\033[44m',
    'magentabg': '\033[45m',
    'cyanbg': '\033[46m',
    'whitebg': '\033[47m',
}

ansi = {
    'reset': '\033[0;0m',
    'bold': '\033[1m',
    'reverse': '\033[2m',
    'underline': '\033[4m',

    'clear': '\033[2J',
#   'clearline': '\033[K',
    'clearline': '\033[2K',
    'save': '\033[s',
    'restore': '\033[u',
    'save_all': '\0337',
    'restore_all': '\0338',
    'linewrap': '\033[7h',
    'nolinewrap': '\033[7l',

    'up': '\033[1A',
    'down': '\033[1B',
    'right': '\033[1C',
    'left': '\033[1D',

    'default': '\033[0;0m',
}

char = {
    'pipe': '|',
    'colon': ':',
    'gt': '>',
    'space': ' ',
    'dash': '-',
    'plus': '+',
    'underscore': '_',
    'sep': ',',
}

def set_theme():
    "Provide a set of colors to use"
    if op.blackonwhite:
        theme = {
            'title': color['darkblue'],
            'subtitle': color['darkcyan'] + ansi['underline'],
            'frame': color['darkblue'],
            'default': ansi['default'],
            'error': color['white'] + color['redbg'],
            'roundtrip': color['darkblue'],
            'debug': color['darkred'],
            'input': color['darkgray'],
            'done_lo': color['black'],
            'done_hi': color['darkgray'],
            'text_lo': color['black'],
            'text_hi': color['darkgray'],
            'unit_lo': color['black'],
            'unit_hi': color['darkgray'],
            'colors_lo': (color['darkred'], color['darkmagenta'], color['darkgreen'], color['darkblue'],
                color['darkcyan'], color['black'], color['red'], color['green']),
            'colors_hi': (color['red'], color['magenta'], color['green'], color['blue'],
                color['cyan'], color['darkgray'], color['darkred'], color['darkgreen']),
        }
    else:
        theme = {
            'title': color['darkblue'],
            'subtitle': color['blue'] + ansi['underline'],
            'frame': color['darkblue'],
            'default': ansi['default'],
            'error': color['white'] + color['redbg'],
            'roundtrip': color['darkblue'],
            'debug': color['darkred'],
            'input': color['darkgray'],
            'done_lo': color['white'],
            'done_hi': color['gray'],
            'text_lo': color['gray'],
            'text_hi': color['darkgray'],
            'unit_lo': color['darkgray'],
            'unit_hi': color['darkgray'],
            'colors_lo': (color['red'], color['yellow'], color['green'], color['blue'],
                color['cyan'], color['white'], color['darkred'], color['darkgreen']),
            'colors_hi': (color['darkred'], color['darkyellow'], color['darkgreen'], color['darkblue'],
                color['darkcyan'], color['gray'], color['red'], color['green']),
        }
    return theme

def ticks():
    "Return the number of 'ticks' since bootup"
    try:
        for line in open('/proc/uptime', 'r').readlines():
            l = line.split()
            if len(l) < 2: continue
            return float(l[0])
    except:
        for line in dopen('/proc/stat').readlines():
            l = line.split()
            if len(l) < 2: continue
            if l[0] == 'btime':
                return time.time() - int(l[1])

def improve(devname):
    "Improve a device name"
    if devname.startswith('/dev/mapper/'):
        devname = devname.split('/')[3]
    elif devname.startswith('/dev/'):
        devname = devname.split('/')[2]
    return devname

def dopen(filename):
    "Open a file for reuse, if already opened, return file descriptor"
    global fds
    if not os.path.exists(filename):
        raise Exception('File %s does not exist' % filename)
    if 'fds' not in globals(): fds = {}
    if filename in fds:
        fds[filename].seek(0)
    else:
        fds[filename] = open(filename, 'r')
    return fds[filename]

def dclose(filename):
    "Close an open file and remove file descriptor from list"
    global fds
    if not 'fds' in globals(): fds = {}
    if filename in fds:
        fds[filename].close()
        del(fds[filename])

def dpopen(cmd):
    "Open a pipe for reuse, if already opened, return pipes"
    global pipes, select
    import select
    if 'pipes' not in globals(): pipes = {}
    if cmd not in pipes:
        try:
            import subprocess
            p = subprocess.Popen(cmd, shell=True, bufsize=0, close_fds=True,
                stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            pipes[cmd] = (p.stdin, p.stdout, p.stderr)
        except ImportError:
            pipes[cmd] = os.popen3(cmd, 't', 0)
    return pipes[cmd]

def readpipe(fileobj, tmout = 0.001):
    "Read available data from pipe in a non-blocking fashion"
    ret = ''
    while not select.select([fileobj.fileno()], [], [], tmout)[0]:
        pass
    while select.select([fileobj.fileno()], [], [], tmout)[0]:
        ret = ret + fileobj.read(1)
    return ret.split('\n')

def greppipe(fileobj, str, tmout = 0.001):
    "Grep available data from pipe in a non-blocking fashion"
    ret = ''
    while not select.select([fileobj.fileno()], [], [], tmout)[0]:
        pass
    while select.select([fileobj.fileno()], [], [], tmout)[0]:
        character = fileobj.read(1)
        if character != '\n':
            ret = ret + character
        elif ret.startswith(str):
            return ret
        else:
            ret = ''
    if op.debug:
        raise Exception('Nothing found during greppipe data collection')
    return None

def matchpipe(fileobj, string, tmout = 0.001):
    "Match available data from pipe in a non-blocking fashion"
    ret = ''
    regexp = re.compile(string)
    while not select.select([fileobj.fileno()], [], [], tmout)[0]:
        pass
    while select.select([fileobj.fileno()], [], [], tmout)[0]:
        character = fileobj.read(1)
        if character != '\n':
            ret = ret + character
        elif regexp.match(ret):
            return ret
        else:
            ret = ''
    if op.debug:
        raise Exception('Nothing found during matchpipe data collection')
    return None

def cmd_test(cmd):
    pipes = os.popen3(cmd, 't', 0)
    for line in pipes[2].readlines():
        raise Exception(line.strip())

def cmd_readlines(cmd):
    pipes = os.popen3(cmd, 't', 0)
    for line in pipes[1].readlines():
        yield line

def cmd_splitlines(cmd, sep=None):
    pipes = os.popen3(cmd, 't', 0)
    for line in pipes[1].readlines():
        yield line.split(sep)

def proc_readlines(filename):
    "Return the lines of a file, one by one"
#   for line in open(filename).readlines():
#       yield line
    ### Implemented linecache (for top-plugins)
    i = 1
    while True:
        line = linecache.getline(filename, i)
        if not line: break
        yield line
        i += 1

def proc_splitlines(filename, sep=None):
    "Return the splitted lines of a file, one by one"
#   for line in open(filename).readlines():
#       yield line.split(sep)
    ### Implemented linecache (for top-plugins)
    i = 1
    while True:
        line = linecache.getline(filename, i)
        if not line: break
        yield line.split(sep)
        i += 1

def proc_readline(filename):
    "Return the first line of a file"
#   return open(filename).read()
    return linecache.getline(filename, 1)

def proc_splitline(filename, sep=None):
    "Return the first line of a file splitted"
#   return open(filename).read().split(sep)
    return linecache.getline(filename, 1).split(sep)

### FIXME: Should we cache this within every step ?
def proc_pidlist():
    "Return a list of process IDs"
    dstat_pid = str(os.getpid())
    for pid in os.listdir('/proc/'):
        try:
            ### Is it a pid ?
int(pid) ### Filter out dstat if pid == dstat_pid: continue yield pid except ValueError: continue def snmpget(server, community, oid): errorIndication, errorStatus, errorIndex, varBinds = cmdgen.CommandGenerator().getCmd( cmdgen.CommunityData('test-agent', community, 0), cmdgen.UdpTransportTarget((server, 161)), oid ) # print('%s -> ind: %s, stat: %s, idx: %s' % (oid, errorIndication, errorStatus, errorIndex)) for x in varBinds: return str(x[1]) def snmpwalk(server, community, oid): ret = [] errorIndication, errorStatus, errorIndex, varBindTable = cmdgen.CommandGenerator().nextCmd( cmdgen.CommunityData('test-agent', community, 0), cmdgen.UdpTransportTarget((server, 161)), oid ) # print('%s -> ind: %s, stat: %s, idx: %s' % (oid, errorIndication, errorStatus, errorIndex)) for x in varBindTable: for y in x: ret.append(str(y[1])) return ret def dchg(var, width, base): "Convert decimal to string given base and length" c = 0 while True: ret = str(int(round(var))) if len(ret) <= width: break var = var / base c = c + 1 else: c = -1 return ret, c def fchg(var, width, base): "Convert float to string given scale and length" c = 0 while True: if var == 0: ret = str('0') break # ret = repr(round(var)) # ret = repr(int(round(var, maxlen))) ret = str(int(round(var, width))) if len(ret) <= width: i = width - len(ret) - 1 while i > 0: ret = ('%%.%df' % i) % var if len(ret) <= width and ret != str(int(round(var, width))): break i = i - 1 else: ret = str(int(round(var))) break var = var / base c = c + 1 else: c = -1 return ret, c def tchg(var, width): "Convert time string to given length" ret = '%2dh%02d' % (var / 60, var % 60) if len(ret) > width: ret = '%2dh' % (var / 60) if len(ret) > width: ret = '%2dd' % (var / 60 / 24) if len(ret) > width: ret = '%2dw' % (var / 60 / 24 / 7) return ret def cprintlist(varlist, ctype, width, scale): "Return all columns color printed" ret = sep = '' for var in varlist: ret = ret + sep + cprint(var, ctype, width, scale) sep = char['space'] return 
ret def cprint(var, ctype = 'f', width = 4, scale = 1000): "Color print one column" base = 1000 if scale == 1024: base = 1024 ### Use units when base is exact 1000 or 1024 unit = False if scale in (1000, 1024) and width >= len(str(base)): unit = True width = width - 1 ### If this is a negative value, return a dash if ctype != 's' and var < 0: if unit: return theme['error'] + '-'.rjust(width) + char['space'] + theme['default'] else: return theme['error'] + '-'.rjust(width) + theme['default'] if base != 1024: units = (char['space'], 'k', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y') elif op.bits and ctype in ('b', ): units = ('b', 'k', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y') base = scale = 1000 var = var * 8.0 else: units = ('B', 'k', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y') if step == op.delay: colors = theme['colors_lo'] ctext = theme['text_lo'] cunit = theme['unit_lo'] cdone = theme['done_lo'] else: colors = theme['colors_hi'] ctext = theme['text_hi'] cunit = theme['unit_hi'] cdone = theme['done_hi'] ### Convert value to string given base and field-length if op.integer and ctype in ('b', 'd', 'p', 'f'): ret, c = dchg(var, width, base) elif op.float and ctype in ('b', 'd', 'p', 'f'): ret, c = fchg(var, width, base) elif ctype in ('b', 'd', 'p'): ret, c = dchg(var, width, base) elif ctype in ('f',): ret, c = fchg(var, width, base) elif ctype in ('s',): ret, c = str(var), ctext elif ctype in ('t',): ret, c = tchg(var, width), ctext else: raise Exception('Type %s not known to dstat.' 
% ctype) ### Set the counter color if ret == '0': color = cunit elif scale <= 0: color = ctext elif ctype in ('p') and round(var) >= 100.0: color = cdone # elif type in ('p'): # color = colors[int(round(var)/scale)%len(colors)] elif scale not in (1000, 1024): color = colors[int(var/scale)%len(colors)] elif ctype in ('b', 'd', 'f'): color = colors[c%len(colors)] else: color = ctext ### Justify value to left if string if ctype in ('s',): ret = color + ret.ljust(width) else: ret = color + ret.rjust(width) ### Add unit to output if unit: if c != -1 and round(var) != 0: ret += cunit + units[c] else: ret += char['space'] return ret def header(totlist, vislist): "Return the header for a set of module counters" line = '' ### Process title for o in vislist: line += o.title() if o is not vislist[-1]: line += theme['frame'] + char['space'] elif totlist != vislist: line += theme['title'] + char['gt'] line += '\n' ### Process subtitle for o in vislist: line += o.subtitle() if o is not vislist[-1]: line += theme['frame'] + char['pipe'] elif totlist != vislist: line += theme['title'] + char['gt'] return line + '\n' def csvheader(totlist): "Return the CVS header for a set of module counters" line = '' ### Process title for o in totlist: line = line + o.csvtitle() if o is not totlist[-1]: line = line + char['sep'] line += '\n' ### Process subtitle for o in totlist: line = line + o.csvsubtitle() if o is not totlist[-1]: line = line + char['sep'] return line + '\n' def info(level, msg): "Output info message" # if level <= op.verbose: print(msg, file=sys.stderr) def die(ret, msg): "Print error and exit with errorcode" print(msg, file=sys.stderr) exit(ret) def initterm(): "Initialise terminal" global termsize ### Unbuffered sys.stdout # sys.stdout = os.fdopen(1, 'w', 0) termsize = None, 0 try: global fcntl, struct, termios import fcntl, struct, termios termios.TIOCGWINSZ except: try: curses.setupterm() curses.tigetnum('lines'), curses.tigetnum('cols') except: pass else: termsize = 
None, 2 else: termsize = None, 1 def gettermsize(): "Return the dynamic terminal geometry" global termsize # if not termsize[0] and not termsize[1]: if not termsize[0]: try: if termsize[1] == 1: s = struct.pack('HHHH', 0, 0, 0, 0) x = fcntl.ioctl(sys.stdout.fileno(), termios.TIOCGWINSZ, s) return struct.unpack('HHHH', x)[:2] elif termsize[1] == 2: curses.setupterm() return curses.tigetnum('lines'), curses.tigetnum('cols') else: termsize = (int(os.environ['LINES']), int(os.environ['COLUMNS'])) except: termsize = 25, 80 return termsize def gettermcolor(): "Return whether the system can use colors or not" if sys.stdout.isatty(): try: import curses curses.setupterm() if curses.tigetnum('colors') < 0: return False except ImportError: print('Color support is disabled as python-curses is not installed.', file=sys.stderr) return False except: print('Color support is disabled as curses does not find terminal "%s".' % os.getenv('TERM'), file=sys.stderr) return False return True return False ### We only want to filter out paths, not ksoftirqd/1 def basename(name): "Perform basename on paths only" if name[0] in ('/', '.'): return os.path.basename(name) return name def getnamebypid(pid, name): "Return the name of a process by taking best guesses and exclusion" ret = None try: #### man proc tells me there should be nulls in here, but sometimes it seems like spaces (esp google chrome) # cmdline = open('/proc/%s/cmdline' % pid).read().split(('\0', ' ')) cmdline = linecache.getline('/proc/%s/cmdline' % pid, 1).split(('\0', ' ')) ret = basename(cmdline[0]) if ret in ('bash', 'csh', 'ksh', 'perl', 'python', 'ruby', 'sh'): ret = basename(cmdline[1]) if ret.startswith('-'): ret = basename(cmdline[-2]) if ret.startswith('-'): raise if not ret: raise except: ret = basename(name) return ret def getcpunr(): "Return the number of CPUs in the system" # POSIX try: return os.sysconf("SC_NPROCESSORS_ONLN") except ValueError: pass # Python 2.6+ try: import multiprocessing return 
multiprocessing.cpu_count() except (ImportError, NotImplementedError): pass # Fallback 1 try: cpunr = open('/proc/cpuinfo', 'r').read().count('processor\t:') if cpunr > 0: return cpunr except IOError: pass # Fallback 2 try: search = re.compile('^cpu\d+') cpunr = 0 for line in dopen('/proc/stat').readlines(): if search.match(line): cpunr += 1 if cpunr > 0: return cpunr except: raise Exception("Problem finding number of CPUs in system.") def blockdevices(): ### We have to replace '!' by '/' to support cciss!c0d0 type devices :-/ return [os.path.basename(filename).replace('!', '/') for filename in glob.glob('/sys/block/*')] ### FIXME: Add scsi support too and improve def sysfs_dev(device): "Convert sysfs device names into device names" m = re.match('ide/host(\d)/bus(\d)/target(\d)/lun(\d)/disc', device) if m: l = m.groups() # ide/host0/bus0/target0/lun0/disc -> 0 -> hda # ide/host0/bus1/target0/lun0/disc -> 2 -> hdc nr = int(l[1]) * 2 + int(l[3]) return 'hd' + chr(ord('a') + nr) m = re.match('cciss/(c\dd\d)', device) if m: l = m.groups() return l[0] m = re.match('placeholder', device) if m: return 'sdX' return device def dev(maj, min): "Convert major/minor pairs into device names" ram = [1, ] ide = [3, 22, 33, 34, 56, 57, 88, 89, 90, 91] loop = [7, ] scsi = [8, 65, 66, 67, 68, 69, 70, 71, 128, 129, 130, 131, 132, 133, 134, 135] md = [9, ] ida = [72, 73, 74, 75, 76, 77, 78, 79] ubd = [98,] cciss = [104,] dm = [253,] if maj in scsi: disc = chr(ord('a') + scsi.index(maj) * 16 + min / 16) part = min % 16 if not part: return 'sd%s' % disc return 'sd%s%d' % (disc, part) elif maj in ide: disc = chr(ord('a') + ide.index(maj) * 2 + min / 64) part = min % 64 if not part: return 'hd%s' % disc return 'hd%s%d' % (disc, part) elif maj in dm: return 'dm-%d' % min elif maj in md: return 'md%d' % min elif maj in loop: return 'loop%d' % min elif maj in ram: return 'ram%d' % min elif maj in cciss: disc = cciss.index(maj) * 16 + min / 16 part = min % 16 if not part: return 'c0d%d' % disc 
return 'c0d%dp%d' % (disc, part) elif maj in ida: cont = ida.index(maj) disc = min / 16 part = min % 16 if not part: return 'ida%d-%d' % (cont, disc) return 'ida%d-%d-%d' % (cont, disc, part) elif maj in ubd: disc = ubd.index(maj) * 16 + min / 16 part = min % 16 if not part: return 'ubd%d' % disc return 'ubd%d-%d' % (disc, part) else: return 'dev%d-%d' % (maj, min) #def mountpoint(dev): # "Return the mountpoint of a mounted device/file" # for entry in dopen('/etc/mtab').readlines(): # if entry: # devlist = entry.split() # if dev == devlist[0]: # return devlist[1] #def readfile(file): # ret = '' # for line in open(file,'r').readlines(): # ret = ret + line # return ret #cdef extern from "sched.h": # struct sched_param: # int sched_priority # int sched_setscheduler(int pid, int policy,sched_param *p) # #SCHED_FIFO = 1 # #def switchRTCPriority(nb): # cdef sched_param sp # sp.sched_priority = nb # sched_setscheduler (0,SCHED_FIFO , &sp); def listplugins(): plugins = [] remod = re.compile('dstat_(.+)$') for filename in globals(): if filename.startswith('dstat_'): plugins.append(remod.match(filename).groups()[0].replace('_', '-')) remod = re.compile('.+/dstat_(.+).py$') for path in pluginpath: for filename in glob.glob(path + '/dstat_*.py'): plugins.append(remod.match(filename).groups()[0].replace('_', '-')) plugins.sort() return plugins def showplugins(): rows, cols = gettermsize() print('internal:\n\t', end='') remod = re.compile('^dstat_(.+)$') plugins = [] for filename in globals(): if filename.startswith('dstat_'): plugins.append(remod.match(filename).groups()[0].replace('_', '-')) plugins.sort() cols2 = cols - 8 for mod in plugins: cols2 = cols2 - len(mod) - 2 if cols2 <= 0: print('\n\t', end='') cols2 = cols - len(mod) - 10 if mod != plugins[-1]: print(mod, end=',') print(mod) remod = re.compile('.+/dstat_(.+).py$') for path in pluginpath: plugins = [] for filename in glob.glob(path + '/dstat_*.py'): plugins.append(remod.match(filename).groups()[0].replace('_', 
'-')) if not plugins: continue plugins.sort() cols2 = cols - 8 print('%s:' % os.path.abspath(path), end='\n\t') for mod in plugins: cols2 = cols2 - len(mod) - 2 if cols2 <= 0: print(end='\n\t') cols2 = cols - len(mod) - 10 if mod != plugins[-1]: print(mod, end=',') print(mod) def exit(ret): sys.stdout.write(ansi['reset']) sys.stdout.flush() if op.pidfile and os.path.exists(op.pidfile): os.remove(op.pidfile) if op.profile and os.path.exists(op.profile): rows, cols = gettermsize() import pstats p = pstats.Stats(op.profile) # p.sort_stats('name') # p.print_stats() p.sort_stats('cumulative').print_stats(rows - 13) # p.sort_stats('time').print_stats(rows - 13) # p.sort_stats('file').print_stats('__init__') # p.sort_stats('time', 'cum').print_stats(.5, 'init') # p.print_callees() elif op.profile: print('No profiling data was found, maybe profiler was interrupted ?', file=sys.stderr) sys.exit(ret) def main(): "Initialization of the program, terminal, internal structures" global cpunr, hz, maxint, ownpid, pagesize global ansi, theme, outputfile global totlist, inittime global update, missed cpunr = getcpunr() hz = os.sysconf('SC_CLK_TCK') try: maxint = (sys.maxint + 1) * 2 except: # Support Python 3 maxint = float("inf") ownpid = str(os.getpid()) pagesize = resource.getpagesize() interval = 1 user = getpass.getuser() hostname = os.uname()[1] ### Write term-title if sys.stdout.isatty(): shell = os.getenv('XTERM_SHELL') term = os.getenv('TERM') if shell == '/bin/bash' and term and re.compile('(screen*|xterm*)').match(term): sys.stdout.write('\033]0;(%s@%s) %s %s\007' % (user, hostname.split('.')[0], os.path.basename(sys.argv[0]), ' '.join(op.args))) ### Check background color (rxvt) ### COLORFGBG="15;default;0" # if os.environ['COLORFGBG'] and len(os.environ['COLORFGBG'].split(';')) >= 3: # l = os.environ['COLORFGBG'].split(';') # bg = int(l[2]) # if bg < 7: # print 'Background is dark' # else: # print 'Background is light' # else: # print 'Background is unknown, assuming 
dark.' ### Check terminal capabilities if op.color == None: op.color = gettermcolor() ### Prepare CSV output file (unbuffered) if op.output: if not os.path.exists(op.output): outputfile = open(op.output, 'w') outputfile.write('"Dstat %s CSV output"\n' % VERSION) header = ('"Author:","Dag Wieers "','','','','"URL:"','"http://dag.wieers.com/home-made/dstat/"\n') outputfile.write(char['sep'].join(header)) else: outputfile = open(op.output, 'a') outputfile.write('\n\n') header = ('"Host:"','"%s"' % hostname,'','','','"User:"','"%s"\n' % user) outputfile.write(char['sep'].join(header)) header = ('"Cmdline:"','"dstat %s"' % ' '.join(op.args),'','','','"Date:"','"%s"\n' % time.strftime('%d %b %Y %H:%M:%S %Z', time.localtime())) outputfile.write(char['sep'].join(header)) ### Create pidfile if op.pidfile: try: pidfile = open(op.pidfile, 'w') pidfile.write(str(os.getpid())) pidfile.close() except Exception as e: print('Failed to create pidfile %s: %s' % (op.pidfile, e), file=sys.stderr) op.pidfile = False ### Empty ansi and theme database if no colors are requested if not op.color: for key in color: color[key] = '' for key in theme: theme[key] = '' for key in ansi: ansi[key] = '' theme['colors_hi'] = (ansi['default'],) theme['colors_lo'] = (ansi['default'],) # print color['blackbg'] ### Disable line-wrapping (does not work ?) 
sys.stdout.write(ansi['nolinewrap']) if not op.update: interval = op.delay ### Build list of requested plugins linewidth = 0 totlist = [] for plugin in op.plugins: ### Set up fallback lists if plugin == 'cpu': mods = ( 'cpu', 'cpu24' ) elif plugin == 'disk': mods = ( 'disk', 'disk24', 'disk24-old' ) elif plugin == 'int': mods = ( 'int', 'int24' ) elif plugin == 'page': mods = ( 'page', 'page24' ) elif plugin == 'swap': mods = ( 'swap', 'swap-old' ) else: mods = ( plugin, ) for mod in mods: pluginfile = 'dstat_' + mod.replace('-', '_') try: if pluginfile not in globals(): import imp fp, pathname, description = imp.find_module(pluginfile, pluginpath) fp.close() ### TODO: Would using .pyc help with anything ? ### Try loading python plugin if description[0] in ('.py', ): exec(open(pathname).read()) #execfile(pathname) exec('global plug; plug = dstat_plugin(); del(dstat_plugin)') plug.filename = pluginfile plug.check() plug.prepare() ### Try loading C plugin (not functional yet) elif description[0] == '.so': exec('import %s; global plug; plug = %s.new()' % (pluginfile, pluginfile)) plug.check() plug.prepare() # print(dir(plug)) # print(plug.__module__) # print(plug.name) else: print('Module %s is of unknown type.' % pluginfile, file=sys.stderr) else: exec('global plug; plug = %s()' % pluginfile) plug.check() plug.prepare() # print(plug.__module__) except Exception as e: if mod == mods[-1]: print('Module %s failed to load. (%s)' % (pluginfile, e), file=sys.stderr) elif op.debug: print('Module %s failed to load, trying another. 
(%s)' % (pluginfile, e), file=sys.stderr) if op.debug >= 3: raise # tb = sys.exc_info()[2] continue except: print('Module %s caused unknown exception' % pluginfile, file=sys.stderr) linewidth = linewidth + plug.statwidth() + 1 totlist.append(plug) if op.debug: print('Module %s' % pluginfile, end='') if hasattr(plug, 'file'): print(' requires %s' % plug.file, end='') print() break if not totlist: die(8, 'None of the stats you selected are available.') if op.output: outputfile.write(csvheader(totlist)) scheduler = sched.scheduler(time.time, time.sleep) inittime = time.time() update = 0 missed = 0 ### Let the games begin while update <= op.delay * (op.count-1) or op.count == -1: scheduler.enterabs(inittime + update, 1, perform, (update,)) # scheduler.enter(1, 1, perform, (update,)) scheduler.run() sys.stdout.flush() update = update + interval linecache.clearcache() if op.update: sys.stdout.write('\n') def perform(update): "Inner loop that calculates counters and constructs output" global totlist, oldvislist, vislist, showheader, rows, cols global elapsed, totaltime, starttime global loop, step, missed starttime = time.time() loop = (update - 1 + op.delay) / op.delay step = ((update - 1) % op.delay) + 1 ### Get current time (may be different from schedule) for debugging if not op.debug: curwidth = 0 else: if step == 1 or loop == 0: totaltime = 0 curwidth = 8 ### FIXME: This is temporary functionality, we should do this better ### If it takes longer than 500ms, than warn ! 
if loop != 0 and starttime - inittime - update > 1: missed = missed + 1 return 0 ### Initialise certain variables if loop == 0: elapsed = ticks() rows, cols = 0, 0 vislist = [] oldvislist = [] showheader = True else: elapsed = step ### FIXME: Make this part smarter if sys.stdout.isatty(): oldcols = cols rows, cols = gettermsize() ### Trim object list to what is visible on screen if oldcols != cols: vislist = [] for o in totlist: newwidth = curwidth + o.statwidth() + 1 if newwidth <= cols or ( vislist == totlist[:-1] and newwidth < cols ): vislist.append(o) curwidth = newwidth ### Check when to display the header if op.header and rows >= 6: if oldvislist != vislist: showheader = True elif not op.update and loop % (rows - 2) == 0: showheader = True elif op.update and step == 1 and loop % (rows - 1) == 0: showheader = True oldvislist = vislist else: vislist = totlist ### Prepare the colors for intermediate updates, last step in a loop is definitive if step == op.delay: theme['default'] = ansi['reset'] else: theme['default'] = theme['text_lo'] ### The first step is to show the definitive line if necessary newline = '' if op.update: if step == 1 and update != 0: newline = '\n' + ansi['reset'] + ansi['clearline'] + ansi['save'] elif loop != 0: newline = ansi['restore'] ### Display header if showheader: if loop == 0 and totlist != vislist: print('Terminal width too small, trimming output.', file=sys.stderr) showheader = False sys.stdout.write(newline) newline = header(totlist, vislist) ### Calculate all objects (visible, invisible) line = newline oline = '' for o in totlist: o.extract() if o in vislist: line = line + o.show() + o.showend(totlist, vislist) if op.output and step == op.delay: oline = oline + o.showcsv() + o.showcsvend(totlist, vislist) ### Print stats sys.stdout.write(line + theme['input']) if op.output and step == op.delay: outputfile.write(oline + '\n') # outputfile.flush() ### Print debugging output if op.debug: totaltime = totaltime + (time.time() - 
starttime) * 1000.0 if loop == 0: totaltime = totaltime * step if op.debug == 1: sys.stdout.write('%s%6.2fms%s' % (theme['roundtrip'], totaltime / step, theme['input'])) elif op.debug == 2: sys.stdout.write('%s%6.2f %s%d:%d%s' % (theme['roundtrip'], totaltime / step, theme['debug'], loop, step, theme['input'])) elif op.debug > 2: sys.stdout.write('%s%6.2f %s%d:%d:%d%s' % (theme['roundtrip'], totaltime / step, theme['debug'], loop, step, update, theme['input'])) if missed > 0: # sys.stdout.write(' '+theme['error']+'= warn =') sys.stdout.write(' ' + theme['error'] + 'missed ' + str(missed+1) + ' ticks' + theme['input']) missed = 0 ### Finish the line if not op.update: sys.stdout.write('\n') ### Main entrance if __name__ == '__main__': try: initterm() op = Options(os.getenv('DSTAT_OPTS','').split() + sys.argv[1:]) theme = set_theme() if op.profile: import profile if os.path.exists(op.profile): os.remove(op.profile) profile.run('main()', op.profile) else: main() except KeyboardInterrupt as e: if op.update: sys.stdout.write('\n') exit(0) else: op = Options('') step = 1 # vim:ts=4:sw=4:et dstat-0.7.4/dstat.conf000066400000000000000000000026321351755116500146430ustar00rootroot00000000000000### Dstat configuration file ### BEWARE: This file is not yet functional, it's a prototype ### to experiment and find the best syntax for a future dstat [main] interval = 5 diff = 1 colors = true abs = false noheader = true noupdate = true default-options = -cdns unit = k background = light update-method = interval-average # snapshot total-average last-n-average [colors] default = red yellow green blue magenta cyan white darkred darkgreen dark = darkred darkyellow darkgreen darkblue darkmagenta darkcyan silver red green percentage = red yellow green [cpu] show = user sys idle wait [ints] show = 5 9 10 14 15 [disk] show = hda hdc lores hires total [diskset] lores = sd[b-t] hires = sd[u-z] sda[a-d] total = sd[b-z] sda[a-d] [load] show = 1 5 15 [mem] show = used buffers cache free [net] 
show = bond0 eth0 eth1
#show = bond? eth?

[proc]
show = run blocked

[swap]
show = in out

[sys]
show = int int

[custom]
load1 = file:///proc/loadavg, line 1, column 1, format %4f
load5 = file:///proc/loadavg, line 1, column 2, format %4f
load15 = file:///proc/loadavg, line 1, column 3, format %4f
int11 = file:///proc/stat, re "^intr ", column 5, format %4d
lo-in = file:///proc/net/dev, re "^lo: ", column 3, format %4d
lo-out = file:///proc/net/dev, re "^lo: ", column 10, format %4d
eth1 = file:///proc/net/dev, re "^eth1: \d+ (\d+) \d+ \d+ \d+ \d+ \d+ \d+ (\d+)", format %4d
switch = snmp://127.0.0.1/net.tcp, format %4d

dstat-0.7.4/examples/curstest

#!/usr/bin/python
import curses, sys

#c = curses.wrapper(s)
#w = curses.initscr()
#curses.start_color()
#print "TERM is", curses.termname()
#if curses.has_colors():
#    print "Has colors"
#print curses.color_pair(curses.COLOR_RED), "Red"
#curses.endwin()
#curses.setupterm('xterm')

curses.setupterm()
if sys.stdout.isatty():
    print "Is a TTY"
print "Size is %sx%s" % (curses.tigetnum('lines'), curses.tigetnum('cols'))
if curses.tigetnum('colors') > 0:
    print "Has colors"
print curses.tigetnum('colors')

dstat-0.7.4/examples/devtest.py

#!/usr/bin/python
import sys
sys.path.insert(0, '/usr/share/dstat/')
import dstat, time

devices = (
    (  1,   0, 'ram0'),
    (  1,   1, 'ram1'),
    (  3,   1, 'hda1'),
    ( 33,   0, 'hde'),
    (  7,   0, 'loop0'),
    (  7,   1, 'loop1'),
    (  8,   0, '/dev/sda'),
    (  8,   1, '/dev/sda1'),
    (  8,  18, '/dev/sdb2'),
    (  8,  37, '/dev/sdc5'),
    (  9,   0, 'md0'),
    (  9,   1, 'md1'),
    (  9,   2, 'md2'),
    ( 74,  16, '/dev/ida/c2d1'),
    ( 77, 241, '/dev/ida/c5d15p1'),
    ( 98,   0, 'ubd/disc0/disc'),
    ( 98,  16, 'ubd/disc1/disc'),
    (104,   0, 'cciss/c0d0'),
    (104,   2, 'cciss/c0d0p2'),
    (253,   0, 'dm-0'),
    (253,   1, 'dm-1'),
)

for maj, min, device in devices:
    print device, '->', dstat.dev(maj, min)

dstat-0.7.4/examples/dstat.py -> ../dstat (symlink)

dstat-0.7.4/examples/mmpipe.py

#!/usr/bin/python
import select, sys, os

def readpipe(file, tmout = 0.001):
    "Read available data from pipe"
    ret = ''
    while not select.select([file.fileno()], [], [], tmout)[0]:
        pass
    while select.select([file.fileno()], [], [], tmout)[0]:
        ret = ret + file.read(1)
    return ret.split('\n')

def dpopen(cmd):
    "Open a pipe for reuse, if already opened, return pipes"
    global pipes
    if 'pipes' not in globals().keys():
        pipes = {}
    if cmd not in pipes.keys():
        try:
            import subprocess
            p = subprocess.Popen(cmd, shell=False, bufsize=0, close_fds=True,
                stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            pipes[cmd] = (p.stdin, p.stdout, p.stderr)
        except ImportError:
            pipes[cmd] = os.popen3(cmd, 't', 0)
    return pipes[cmd]

### Unbuffered sys.stdout
sys.stdout = os.fdopen(1, 'w', 0)

### Main entrance
if __name__ == '__main__':
    try:
#        stdin, stdout, stderr = dpopen('/usr/lpp/mmfs/bin/mmpmon -p -s')
#        stdin.write('reset\n')
        stdin, stdout, stderr = dpopen('/bin/bash')
        stdin.write('uname -a\n')
        readpipe(stdout)

        while True:
#            stdin.write('io_s\n')
            stdin.write('cat /proc/stat\n')
            for line in readpipe(stdout):
                print line
    except KeyboardInterrupt, e:
        print

# vim:ts=4:sw=4

dstat-0.7.4/examples/mstat.py

#!/usr/bin/python

### Example2: simple sub-second monitor (ministat)
### This is a quick example showing how to implement your own *stat utility
### If you're interested in such functionality, contact me at dag@wieers.com

import sys
sys.path.insert(0, '/usr/share/dstat/')
import dstat, time

### Set default theme
dstat.theme = dstat.set_theme()

### Allow arguments
try: delay = float(sys.argv[1])
except: delay = 0.2
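The mmpipe example above keeps one long-lived child process and drains whatever its stdout currently holds with select(), instead of blocking on a full readline(). A minimal Python 3 sketch of the same pattern — using `cat` as a portable stand-in for a real monitored command, which is an assumption made here for illustration:

```python
import os
import select
import subprocess

def drain(fileobj, tmout=0.1):
    """Read whatever is currently buffered on a pipe, without blocking forever."""
    out = b''
    # Block until at least one byte is ready to read...
    while not select.select([fileobj], [], [], tmout)[0]:
        pass
    # ...then keep reading for as long as data keeps arriving.
    while select.select([fileobj], [], [], tmout)[0]:
        chunk = os.read(fileobj.fileno(), 4096)
        if not chunk:  # EOF: the child closed its end of the pipe
            break
        out += chunk
    return out.decode().splitlines()

# One long-lived child, written to and drained repeatedly (the dpopen() idea).
proc = subprocess.Popen(['cat'], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, bufsize=0)
proc.stdin.write(b'hello\n')
proc.stdin.flush()
lines = drain(proc.stdout)
proc.stdin.close()
proc.wait()
print(lines)  # -> ['hello']
```

Unlike the polling-by-single-byte loop in the original, this reads in 4 KiB chunks; the select() timeout decides when a burst of output is considered complete, which is a trade-off between latency and the risk of splitting a message.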
try: count = int(sys.argv[2])
except: count = 10

### Load stats
stats = []
dstat.starttime = time.time()
dstat.tick = dstat.ticks()
for o in (dstat.dstat_epoch(), dstat.dstat_cpu(), dstat.dstat_mem(), dstat.dstat_load(), dstat.dstat_disk(), dstat.dstat_sys()):
    try: o.check()
    except Exception, e: print e
    else: stats.append(o)

### Make time stats sub-second
stats[0].format = ('t', 14, 0)

### Print headers
title = subtitle = ''
for o in stats:
    title = title + ' ' + o.title()
    subtitle = subtitle + ' ' + o.subtitle()
print '\n' + title + '\n' + subtitle

### Print stats
for dstat.update in range(count):
    line = ''
    for o in stats:
        o.extract()
        line = line + ' ' + o.show()
    print line + dstat.ansi['reset']
    if dstat.update != count-1: time.sleep(delay)
    dstat.tick = 1
print dstat.ansi['reset']

dstat-0.7.4/examples/read.py

#!/usr/bin/python

### Example 1: Direct accessing stats
### This is a quick example showing how you can access dstat data
### If you're interested in this functionality, contact me at dag@wieers.com

import sys
sys.path.insert(0, '/usr/share/dstat/')
import dstat

### Set default theme
dstat.theme = dstat.set_theme()

clear = dstat.ansi['reset']
dstat.tick = dstat.ticks()

c = dstat.dstat_cpu()
print c.title() + '\n' + c.subtitle()
c.extract()
print c.show(), clear
print 'Percentage:', c.val['total']
print 'Raw:', c.cn2['total']
print

m = dstat.dstat_mem()
print m.title() + '\n' + m.subtitle()
m.extract()
print m.show(), clear
print 'Raw:', m.val
print

l = dstat.dstat_load()
print l.title() + '\n' + l.subtitle()
l.extract()
print l.show(), clear
print 'Raw:', l.val
print

d = dstat.dstat_disk()
print d.title() + '\n' + d.subtitle()
d.extract()
print d.show(), clear
print 'Raw:', d.val['total']
print

dstat-0.7.4/examples/tdbtest

#!/usr/bin/python
import sys, tdb

db = tdb.tdb('/var/cache/samba/connections.tdb')
print db.keys()
key=db.firstkey()
while key:
    print db.fetch(key)
    key=db.nextkey(key)

db = tdb.tdb('/var/cache/samba/locking.tdb')
print db.keys

db = tdb.tdb('/var/cache/samba/sessionid.tdb')
print db.keys

dstat-0.7.4/packaging/rpm/dstat.spec

# $Id$
# Authority: dag
# Upstream: Dag Wieers

Summary: Pluggable real-time performance monitoring tool
Name: dstat
Version: 0.7.3
Release: 1
License: GPL
Group: System Environment/Base
URL: http://dag.wieers.com/home-made/dstat/

Source: http://dag.wieers.com/home-made/dstat/dstat-%{version}.tar.bz2
BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-root
BuildArch: noarch
BuildRequires: python >= 2.0
Requires: python >= 2.0

%description
Dstat is a versatile replacement for vmstat, iostat, netstat and ifstat.
Dstat overcomes some of their limitations and adds some extra features,
more counters and flexibility. Dstat is handy for monitoring systems
during performance tuning tests, benchmarks or troubleshooting.

Dstat allows you to view all of your system resources in real-time, you
can eg. compare disk utilization in combination with interrupts from your
IDE controller, or compare the network bandwidth numbers directly with
the disk throughput (in the same interval).

Dstat gives you detailed selective information in columns and clearly
indicates in what magnitude and unit the output is displayed. Less
confusion, less mistakes. And most importantly, it makes it very easy to
write plugins to collect your own counters and extend in ways you never
expected.

%prep
%setup

%build

%install
%{__rm} -rf %{buildroot}
%{__make} install DESTDIR="%{buildroot}"

%clean
%{__rm} -rf %{buildroot}

%files
%defattr(-, root, root, 0755)
%doc AUTHORS ChangeLog COPYING README TODO docs/*.html docs/*.adoc examples/
%doc %{_mandir}/man1/dstat.1*
%{_bindir}/dstat
%{_datadir}/dstat/

%changelog
* Fri Mar 18 2016 Dag Wieers - 0.7.3-1
- Updated to release 0.7.3.

* Tue Jun 15 2010 Dag Wieers - 0.7.2-1
- Updated to release 0.7.2.

* Mon Feb 22 2010 Dag Wieers - 0.7.1-1
- Updated to release 0.7.1.

* Wed Nov 25 2009 Dag Wieers - 0.7.0-1
- Updated to release 0.7.0.
- Reduce the number of paths used for importing modules. {CVE-2009-3894}

* Tue Dec 02 2008 Dag Wieers - 0.6.9-1
- Updated to release 0.6.9.

* Sun Aug 17 2008 Dag Wieers - 0.6.8-1
- Updated to release 0.6.8.

* Tue Feb 26 2008 Dag Wieers - 0.6.7-1
- Updated to release 0.6.7.

* Sat Apr 28 2007 Dag Wieers - 0.6.6-1
- Updated to release 0.6.6.

* Tue Apr 17 2007 Dag Wieers - 0.6.5-1
- Updated to release 0.6.5.

* Tue Dec 12 2006 Dag Wieers - 0.6.4-1
- Updated to release 0.6.4.

* Mon Jun 26 2006 Dag Wieers - 0.6.3-1
- Updated to release 0.6.3.

* Thu Mar 09 2006 Dag Wieers - 0.6.2-1
- Updated to release 0.6.2.

* Mon Sep 05 2005 Dag Wieers - 0.6.1-1
- Updated to release 0.6.1.

* Sun May 29 2005 Dag Wieers - 0.6.0-1
- Updated to release 0.6.0.

* Fri Apr 08 2005 Dag Wieers - 0.5.10-1
- Updated to release 0.5.10.

* Mon Mar 28 2005 Dag Wieers - 0.5.9-1
- Updated to release 0.5.9.

* Tue Mar 15 2005 Dag Wieers - 0.5.8-1
- Updated to release 0.5.8.

* Fri Dec 31 2004 Dag Wieers - 0.5.7-1
- Updated to release 0.5.7.

* Mon Dec 20 2004 Dag Wieers - 0.5.6-1
- Updated to release 0.5.6.

* Thu Dec 02 2004 Dag Wieers - 0.5.5-1
- Updated to release 0.5.5.

* Thu Nov 25 2004 Dag Wieers - 0.5.4-1
- Updated to release 0.5.4.
- Use dstat15 if distribution uses python 1.5.

* Sun Nov 21 2004 Dag Wieers - 0.5.3-1
- Updated to release 0.5.3.

* Sat Nov 13 2004 Dag Wieers - 0.5.2-1
- Updated to release 0.5.2.
* Thu Nov 11 2004 Dag Wieers - 0.5.1-1
- Updated to release 0.5.1.

* Tue Oct 26 2004 Dag Wieers - 0.4-1
- Initial package. (using DAR)

dstat-0.7.4/packaging/snap/python2 -> python2.7 (symlink)

dstat-0.7.4/packaging/snap/snapcraft.yaml

name: dstat
version: 0.7.3
summary: Versatile resource statistic tool
description: dstat is a versatile replacement for vmstat, iostat, and ifstat.
confinement: strict

apps:
  dstat:
    command: usr/bin/dstat
    plugs: [home, system-observe]

parts:
  dstat:
    plugin: make
    source: https://github.com/dagwieers/dstat/archive/0.7.3.tar.gz
    build-packages: [gcc, libc6-dev]
    stage-packages: [python2.7]
  python2-symlink:
    plugin: copy
    files:
      python2: usr/bin/

snap:
  - -usr/lib/gcc
  - -usr/lib/mime
  - -usr/lib/x86_64-linux-gnu
  - -lib/x86_64-linux-gnu
  - usr/bin/python2
  - usr/bin/python2.7
  - usr/bin/dstat

dstat-0.7.4/plugins/dstat_battery.py

### Author: Dag Wieers
### Author: Sven-Hendrik Haase

class dstat_plugin(dstat):
    """
    Percentage of remaining battery power as reported by ACPI.
    """
    def __init__(self):
        self.name = 'battery'
        self.type = 'p'
        self.width = 4
        self.scale = 34
        self.battery_type = "none"

    def check(self):
        if os.path.exists('/proc/acpi/battery/'):
            self.battery_type = "procfs"
        elif glob.glob('/sys/class/power_supply/BAT*'):
            self.battery_type = "sysfs"
        else:
            raise Exception('No ACPI battery information found.')

    def vars(self):
        ret = []
        if self.battery_type == "procfs":
            for battery in os.listdir('/proc/acpi/battery/'):
                for line in dopen('/proc/acpi/battery/'+battery+'/state').readlines():
                    l = line.split()
                    if len(l) < 2: continue
                    if l[0] == 'present:' and l[1] == 'yes':
                        ret.append(battery)
        elif self.battery_type == "sysfs":
            for battery in glob.glob('/sys/class/power_supply/BAT*'):
                for line in dopen(battery+'/present').readlines():
                    if int(line[0]) == 1:
                        ret.append(os.path.basename(battery))
        ret.sort()
        return ret

    def nick(self):
        return [name.lower() for name in self.vars]

    def extract(self):
        for battery in self.vars:
            if self.battery_type == "procfs":
                for line in dopen('/proc/acpi/battery/'+battery+'/info').readlines():
                    l = line.split()
                    if len(l) < 4: continue
                    if l[0] == 'last':
                        full = int(l[3])
                        break
                for line in dopen('/proc/acpi/battery/'+battery+'/state').readlines():
                    l = line.split()
                    if len(l) < 3: continue
                    if l[0] == 'remaining':
                        current = int(l[2])
                        break
                if current:
                    self.val[battery] = current * 100.0 / full
                else:
                    self.val[battery] = -1
            elif self.battery_type == "sysfs":
                for line in dopen('/sys/class/power_supply/'+battery+'/capacity').readlines():
                    current = int(line)
                    break
                if current:
                    self.val[battery] = current
                else:
                    self.val[battery] = -1

# vim:ts=4:sw=4:et

dstat-0.7.4/plugins/dstat_battery_remain.py

### Author: Dag Wieers

class dstat_plugin(dstat):
    """
    Remaining battery time.

    Calculated from power drain and remaining battery power. Information
    is retrieved from ACPI.
    """
    def __init__(self):
        self.name = 'remain'
        self.type = 't'
        self.width = 5
        self.scale = 0

    def vars(self):
        ret = []
        for battery in os.listdir('/proc/acpi/battery/'):
            for line in dopen('/proc/acpi/battery/'+battery+'/state').readlines():
                l = line.split()
                if len(l) < 2: continue
                if l[0] == 'present:' and l[1] == 'yes':
                    ret.append(battery)
        ret.sort()
        return ret

    def nick(self):
        return [name.lower() for name in self.vars]

    def extract(self):
        for battery in self.vars:
            for line in dopen('/proc/acpi/battery/'+battery+'/state').readlines():
                l = line.split()
                if len(l) < 3: continue
                if l[0:2] == ['remaining', 'capacity:']:
                    remaining = int(l[2])
                    continue
                elif l[0:2] == ['present', 'rate:']:
                    rate = int(l[2])
                    continue
            if rate and remaining:
                self.val[battery] = remaining * 60 / rate
            else:
                self.val[battery] = -1

# vim:ts=4:sw=4:et

dstat-0.7.4/plugins/dstat_condor_queue.py

### Author:
### Condor queue plugin
### Display information about jobs in queue (using condor_q(1))
###
### WARNING: with many jobs in the queue, the condor_q might take quite
### some time to execute and use quite a bit of resources. Consider
### using a longer delay.
import os import re global condor_classad class condor_classad: """ Utility class to work with Condor ClassAds """ global ATTR_VAR_PATTERN ATTR_VAR_PATTERN = re.compile(r'\$\((\w+)\)') def __init__(self, file=None, config=None): if file != None: self.attributes = condor_classad._read_from_file(file) elif config != None: self.attributes = condor_classad._parse(config) if self.attributes == None: raise Exception('condor_config must be initialized either using a file or config text') local_config_file = self['LOCAL_CONFIG_FILE'] if local_config_file != None: for k,v in condor_classad._read_from_file(local_config_file).items(): self.attributes[k] = v def __getitem__(self, name): if name in self.attributes: self._expand(name) return self.attributes[name] def _expand(self, var): if not var in self.attributes: return while True: m = ATTR_VAR_PATTERN.match(self.attributes[var]) if m == None: break var_name = m.group(1) self.attributes[var] = ATTR_VAR_PATTERN.sub(self.attributes[var_name], self.attributes[var]) @staticmethod def _parse(text): attributes = {} for l in [l for l in text.split('\n') if not l.strip().startswith('#')]: l = l.split('=') if len(l) <= 1 or len(l[0]) == 0: continue attributes[l[0].strip()] = ''.join(l[1:]).strip() return attributes @staticmethod def _read_from_file(filename): if not os.access(filename, os.R_OK): raise Exception('Unable to read file %s' % filename) try: f = open(filename) return condor_classad._parse((f.read())) finally: f.close() class dstat_plugin(dstat): """ Plugin for Condor queue stats """ global CONDOR_Q_STAT_PATTER CONDOR_Q_STAT_PATTER = re.compile(r'(\d+) jobs; (\d+) idle, (\d+) running, (\d+) held') def __init__(self): self.name = 'condor queue' self.vars = ('jobs', 'idle', 'running', 'held') self.type = 'd' self.width = 5 self.scale = 1 self.condor_config = None def check(self): config_file = os.environ['CONDOR_CONFIG'] if config_file == None: raise Exception('Environment varibale CONDOR_CONFIG is missing') 
self.condor_config = condor_classad(config_file) bin_dir = self.condor_config['BIN'] if bin_dir == None: raise Exception('Unable to find BIN directory in condor config file %s' % config_file) self.condor_status_cmd = os.path.join(bin_dir, 'condor_q') if not os.access(self.condor_status_cmd, os.X_OK): raise Exception('Needs %s in the path' % self.condor_status_cmd) else: try: p = os.popen(self.condor_status_cmd+' 2>&1 /dev/null') ret = p.close() if ret: raise Exception('Cannot interface with Condor - condor_q returned != 0?') except IOError: raise Exception('Unable to execute %s' % self.condor_status_cmd) return True def extract(self): last_line = None try: for repeats in range(3): for last_line in cmd_readlines(self.condor_status_cmd): pass m = CONDOR_Q_STAT_PATTER.match(last_line) if m == None: raise Exception('Invalid output from %s. Got: %s' % (cmd, last_line)) stats = [int(s.strip()) for s in m.groups()] for i,j in enumerate(self.vars): self.val[j] = stats[i] except Exception: for name in self.vars: self.val[name] = -1 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_cpufreq.py000066400000000000000000000032001351755116500175440ustar00rootroot00000000000000### Author: dag@wieers.com class dstat_plugin(dstat): """ CPU frequency in percentage as reported by ACPI. 
""" def __init__(self): self.name = 'frequency' self.type = 'p' self.width = 4 self.scale = 34 def check(self): for cpu in glob.glob('/sys/devices/system/cpu/cpu[0-9]*'): if not os.access(cpu+'/cpufreq/scaling_cur_freq', os.R_OK): raise Exception('Cannot access acpi %s frequency information' % os.path.basename(cpu)) def vars(self): ret = [] for name in glob.glob('/sys/devices/system/cpu/cpu[0-9]*'): ret.append(os.path.basename(name)) ret.sort() return ret # return os.listdir('/sys/devices/system/cpu/') def nick(self): return [name.lower() for name in self.vars] def extract(self): for cpu in self.vars: for line in dopen('/sys/devices/system/cpu/'+cpu+'/cpufreq/scaling_max_freq').readlines(): l = line.split() max = int(l[0]) for line in dopen('/sys/devices/system/cpu/'+cpu+'/cpufreq/scaling_cur_freq').readlines(): l = line.split() cur = int(l[0]) ### Need to close because of bug in sysfs (?) dclose('/sys/devices/system/cpu/'+cpu+'/cpufreq/scaling_cur_freq') self.set1[cpu] = self.set1[cpu] + cur * 100.0 / max if op.update: self.val[cpu] = self.set1[cpu] / elapsed else: self.val[cpu] = self.set1[cpu] if step == op.delay: self.set1[cpu] = 0 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_dbus.py000066400000000000000000000026301351755116500170420ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Number of active dbus sessions. 
""" def __init__(self): self.name = 'dbus' self.nick = ('sys', 'ses') self.vars = ('system', 'session') self.type = 'd' self.width = 3 self.scale = 100 def check(self): # dstat.info(1, 'The dbus module is an EXPERIMENTAL module.') try: global dbus import dbus try: self.sysbus = dbus.Bus(dbus.Bus.TYPE_SYSTEM).get_service('org.freedesktop.DBus').get_object('/org/freedesktop/DBus', 'org.freedesktop.DBus') try: self.sesbus = dbus.Bus(dbus.Bus.TYPE_SESSION).get_service('org.freedesktop.DBus').get_object('/org/freedesktop/DBus', 'org.freedesktop.DBus') except: self.sesbus = None except: raise Exception('Unable to connect to dbus message bus') except: raise Exception('Needs python-dbus module') def extract(self): self.val['system'] = len(self.sysbus.ListServices()) - 1 try: self.val['session'] = len(self.sesbus.ListServices()) - 1 except: self.val['session'] = -1 # print(dir(b)); print(dir(s)); print(dir(d)); print(d.ListServices()) # print(dir(d)) # print(d.ListServices()) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_disk_avgqu.py000066400000000000000000000041061351755116500202420ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ The average queue length of the requests that were issued to the device. 
""" def __init__(self): self.version = 2 self.nick = ('avgqu',) self.type = 'f' self.width = 4 self.scale = 10 self.diskfilter = re.compile('^([hsv]d[a-z]+\d+|cciss/c\d+d\d+p\d+|dm-\d+|md\d+|mmcblk\d+p\d0|VxVM\d+)$') self.open('/proc/diskstats') self.cols = 1 self.struct = dict( rq_ticks=0 ) def discover(self, *objlist): ret = [] for l in self.splitlines(): if len(l) < 13: continue if l[3:] == ['0',] * 11: continue name = l[2] ret.append(name) for item in objlist: ret.append(item) if not ret: raise Exception('No suitable block devices found to monitor') return ret def vars(self): ret = [] if op.disklist: varlist = op.disklist else: varlist = [] blockdevices = [os.path.basename(filename) for filename in glob.glob('/sys/block/*')] for name in self.discover: if self.diskfilter.match(name): continue if name not in blockdevices: continue varlist.append(name) varlist.sort() for name in varlist: if name in self.discover: ret.append(name) return ret def name(self): return self.vars def extract(self): for l in self.splitlines(): if len(l) < 13: continue if l[3:] == ['0',] * 11: continue if l[3] == '0' and l[7] == '0': continue name = l[2] if name not in self.vars or name == 'total': continue self.set2[name] = dict( rq_ticks = int(l[13]), ) for name in self.vars: self.val[name] = ( ( self.set2[name]['rq_ticks'] - self.set1[name]['rq_ticks'] ) * 1.0 / elapsed / 1000, ) if step == op.delay: self.set1.update(self.set2) dstat-0.7.4/plugins/dstat_disk_avgrq.py000066400000000000000000000046711351755116500202460ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ The average size (in sectors) of the requests that were issued to the device. 
""" def __init__(self): self.version = 2 self.nick = ('avgrq',) self.type = 'f' self.width = 4 self.scale = 10 self.diskfilter = re.compile('^([hsv]d[a-z]+\d+|cciss/c\d+d\d+p\d+|dm-\d+|md\d+|mmcblk\d+p\d0|VxVM\d+)$') self.open('/proc/diskstats') self.cols = 1 self.struct = dict( nr_ios=0, rd_sect=0, wr_sect=0 ) def discover(self, *objlist): ret = [] for l in self.splitlines(): if len(l) < 13: continue if l[3:] == ['0',] * 11: continue name = l[2] ret.append(name) for item in objlist: ret.append(item) if not ret: raise Exception('No suitable block devices found to monitor') return ret def vars(self): ret = [] if op.disklist: varlist = op.disklist else: varlist = [] blockdevices = [os.path.basename(filename) for filename in glob.glob('/sys/block/*')] for name in self.discover: if self.diskfilter.match(name): continue if name not in blockdevices: continue varlist.append(name) varlist.sort() for name in varlist: if name in self.discover: ret.append(name) return ret def name(self): return self.vars def extract(self): for l in self.splitlines(): if len(l) < 13: continue if l[3:] == ['0',] * 11: continue if l[3] == '0' and l[7] == '0': continue name = l[2] if name not in self.vars or name == 'total': continue self.set2[name] = dict( nr_ios = int(l[3])+int(l[7]), rd_sect = int(l[9]), wr_sect = int(l[11]), ) for name in self.vars: tput = ( self.set2[name]['nr_ios'] - self.set1[name]['nr_ios'] ) if tput: ticks = self.set2[name]['rd_sect'] - self.set1[name]['rd_sect'] + \ self.set2[name]['wr_sect'] - self.set1[name]['wr_sect'] self.val[name] = ( ticks * 1.0 / tput, ) else: self.val[name] = ( 0.0, ) if step == op.delay: self.set1.update(self.set2) dstat-0.7.4/plugins/dstat_disk_svctm.py000066400000000000000000000046751351755116500202660ustar00rootroot00000000000000### Author: David Nicklay ### Modified from disk-util: Dag Wieers class dstat_plugin(dstat): """ The average service time (in milliseconds) for I/O requests that were issued to the device. Warning! 
Do not trust this field any more. """ def __init__(self): self.version = 2 self.nick = ('svctm',) self.type = 'f' self.width = 4 self.scale = 1 self.diskfilter = re.compile('^([hsv]d[a-z]+\d+|cciss/c\d+d\d+p\d+|dm-\d+|md\d+|mmcblk\d+p\d0|VxVM\d+)$') self.open('/proc/diskstats') self.cols = 1 self.struct = dict( nr_ios=0, tot_ticks=0 ) def discover(self, *objlist): ret = [] for l in self.splitlines(): if len(l) < 13: continue if l[3:] == ['0',] * 11: continue name = l[2] ret.append(name) for item in objlist: ret.append(item) if not ret: raise Exception('No suitable block devices found to monitor') return ret def vars(self): ret = [] if op.disklist: varlist = op.disklist else: varlist = [] blockdevices = [os.path.basename(filename) for filename in glob.glob('/sys/block/*')] for name in self.discover: if self.diskfilter.match(name): continue if name not in blockdevices: continue varlist.append(name) varlist.sort() for name in varlist: if name in self.discover: ret.append(name) return ret def name(self): return self.vars def extract(self): for l in self.splitlines(): if len(l) < 13: continue if l[3:] == ['0',] * 11: continue if l[3] == '0' and l[7] == '0': continue name = l[2] if name not in self.vars or name == 'total': continue self.set2[name] = dict( nr_ios = int(l[3])+int(l[7]), tot_ticks = int(l[12]), ) for name in self.vars: tput = ( self.set2[name]['nr_ios'] - self.set1[name]['nr_ios'] ) if tput: util = ( self.set2[name]['tot_ticks'] - self.set1[name]['tot_ticks'] ) self.val[name] = ( util * 1.0 / tput, ) else: self.val[name] = ( 0.0, ) if step == op.delay: self.set1.update(self.set2) dstat-0.7.4/plugins/dstat_disk_tps.py000066400000000000000000000052311351755116500177250ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Number of read and write transactions per device. Displays the number of read and write I/O transactions per device. 
""" def __init__(self): self.nick = ('#read', '#writ' ) self.type = 'd' self.width = 5 self.scale = 1000 self.diskfilter = re.compile('^([hsv]d[a-z]+\d+|cciss/c\d+d\d+p\d+|dm-\d+|md\d+|mmcblk\d+p\d0|VxVM\d+)$') self.open('/proc/diskstats') self.cols = 2 def discover(self, *objlist): ret = [] for l in self.splitlines(): if len(l) < 13: continue if l[3:] == ['0',] * 11: continue name = l[2] ret.append(name) for item in objlist: ret.append(item) if not ret: raise Exception('No suitable block devices found to monitor') return ret def vars(self): ret = [] if op.disklist: varlist = op.disklist elif not op.full: varlist = ('total',) else: varlist = [] for name in self.discover: if self.diskfilter.match(name): continue if name not in blockdevices(): continue varlist.append(name) # if len(varlist) > 2: varlist = varlist[0:2] varlist.sort() for name in varlist: if name in self.discover + ['total'] or name in op.diskset: ret.append(name) return ret def name(self): return ['dsk/'+sysfs_dev(name) for name in self.vars] def extract(self): for name in self.vars: self.set2[name] = (0, 0) for l in self.splitlines(): if len(l) < 13: continue if l[3] == '0' and l[7] == '0': continue if l[3:] == ['0',] * 11: continue name = l[2] if not self.diskfilter.match(name): self.set2['total'] = ( self.set2['total'][0] + int(l[3]), self.set2['total'][1] + int(l[7]) ) if name in self.vars and name != 'total': self.set2[name] = ( self.set2[name][0] + int(l[3]), self.set2[name][1] + int(l[7])) for diskset in self.vars: if diskset in op.diskset: for disk in op.diskset[diskset]: if re.match('^'+disk+'$', name): self.set2[diskset] = ( self.set2[diskset][0] + int(l[3]), self.set2[diskset][1] + int(l[7]) ) for name in self.set2: self.val[name] = list(map(lambda x, y: (y - x) / elapsed, self.set1[name], self.set2[name])) if step == op.delay: self.set1.update(self.set2) dstat-0.7.4/plugins/dstat_disk_util.py000066400000000000000000000061031351755116500200730ustar00rootroot00000000000000### Author: Dag 
Wieers class dstat_plugin(dstat): """ Percentage of bandwidth utilization for block devices. Displays percentage of CPU time during which I/O requests were issued to the device (bandwidth utilization for the device). Device saturation occurs when this value is close to 100%. """ def __init__(self): self.nick = ('util', ) self.type = 'f' self.width = 4 self.scale = 34 self.diskfilter = re.compile('^([hsv]d[a-z]+\d+|cciss/c\d+d\d+p\d+|dm-\d+|md\d+|mmcblk\d+p\d0|VxVM\d+)$') self.open('/proc/diskstats') self.cols = 1 self.struct = dict( tot_ticks=0 ) def discover(self, *objlist): ret = [] for l in self.splitlines(): if len(l) < 13: continue if l[3:] == ['0',] * 11: continue name = l[2] ret.append(name) for item in objlist: ret.append(item) if not ret: raise Exception('No suitable block devices found to monitor') return ret def basename(self, disk): "Strip /dev/ and convert symbolic link" if disk[:5] == '/dev/': # file or symlink if os.path.exists(disk): # e.g. /dev/disk/by-uuid/15e40cc5-85de-40ea-b8fb-cb3a2eaf872 if os.path.islink(disk): target = os.readlink(disk) # convert relative pathname to absolute if target[0] != '/': target = os.path.join(os.path.dirname(disk), target) target = os.path.normpath(target) print('dstat: symlink %s -> %s' % (disk, target)) disk = target # trim leading /dev/ return disk[5:] else: print('dstat: %s does not exist' % disk) else: return disk def vars(self): ret = [] if op.disklist: varlist = list(map(self.basename, op.disklist)) else: varlist = [] for name in self.discover: if self.diskfilter.match(name): continue if name not in blockdevices(): continue varlist.append(name) # if len(varlist) > 2: varlist = varlist[0:2] varlist.sort() for name in varlist: if name in self.discover: ret.append(name) return ret def name(self): return [sysfs_dev(name) for name in self.vars] def extract(self): for l in self.splitlines(): if len(l) < 13: continue if l[5] == '0' and l[9] == '0': continue if l[3:] == ['0',] * 11: continue name = l[2] if name not 
in self.vars: continue self.set2[name] = dict( tot_ticks = int(l[12]) ) for name in self.vars: self.val[name] = ( (self.set2[name]['tot_ticks'] - self.set1[name]['tot_ticks']) * 1.0 * hz / elapsed / 1000, ) if step == op.delay: self.set1.update(self.set2) dstat-0.7.4/plugins/dstat_disk_wait.py000066400000000000000000000053211351755116500200630ustar00rootroot00000000000000### Author: David Nicklay ### Modified from disk-util: Dag Wieers class dstat_plugin(dstat): """ Read and Write average wait times of block devices. Displays the average read and write wait times of block devices """ def __init__(self): self.nick = ('rawait', 'wawait') self.type = 'f' self.width = 4 self.scale = 1 self.diskfilter = re.compile('^([hsv]d[a-z]+\d+|cciss/c\d+d\d+p\d+|dm-\d+|md\d+|mmcblk\d+p\d0|VxVM\d+)$') self.open('/proc/diskstats') self.cols = 1 self.struct = dict( rd_ios=0, wr_ios=0, rd_ticks=0, wr_ticks=0 ) def discover(self, *objlist): ret = [] for l in self.splitlines(): if len(l) < 13: continue if l[3:] == ['0',] * 11: continue name = l[2] ret.append(name) for item in objlist: ret.append(item) if not ret: raise Exception('No suitable block devices found to monitor') return ret def vars(self): ret = [] if op.disklist: varlist = op.disklist else: varlist = [] blockdevices = [os.path.basename(filename) for filename in glob.glob('/sys/block/*')] for name in self.discover: if self.diskfilter.match(name): continue if name not in blockdevices: continue varlist.append(name) varlist.sort() for name in varlist: if name in self.discover: ret.append(name) return ret def name(self): return self.vars def extract(self): for l in self.splitlines(): if len(l) < 13: continue if l[5] == '0' and l[9] == '0': continue if l[3:] == ['0',] * 11: continue name = l[2] if name not in self.vars: continue self.set2[name] = dict( rd_ios = int(l[3]), wr_ios = int(l[7]), rd_ticks = int(l[6]), wr_ticks = int(l[10]), ) for name in self.vars: rd_tput = self.set2[name]['rd_ios'] - self.set1[name]['rd_ios'] wr_tput 
= self.set2[name]['wr_ios'] - self.set1[name]['wr_ios'] if rd_tput: rd_wait = ( self.set2[name]['rd_ticks'] - self.set1[name]['rd_ticks'] ) * 1.0 / rd_tput else: rd_wait = 0 if wr_tput: wr_wait = ( self.set2[name]['wr_ticks'] - self.set1[name]['wr_ticks'] ) * 1.0 / wr_tput else: wr_wait = 0 self.val[name] = ( rd_wait, wr_wait ) if step == op.delay: self.set1.update(self.set2) dstat-0.7.4/plugins/dstat_dstat.py000066400000000000000000000020321351755116500172200ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Provide more information related to the dstat process. The dstat cputime is the total cputime dstat requires per second. On a system with one cpu and one core, the total cputime is 1000ms. On a system with 2 cores the total is 2000ms. It may help to vizualise the performance of Dstat and its selection of plugins. """ def __init__(self): self.name = 'dstat' self.vars = ('cputime', 'latency') self.type = 'd' self.width = 5 self.scale = 1000 self.open('/proc/%s/schedstat' % ownpid) def extract(self): l = self.splitline() # l = linecache.getline('/proc/%s/schedstat' % self.pid, 1).split() self.set2['cputime'] = int(l[0]) self.set2['latency'] = int(l[1]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_dstat_cpu.py000066400000000000000000000022511351755116500200720ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Provide CPU information related to the dstat process. This plugin shows the CPU utilization for the dstat process itself, including the user-space and system-space (kernel) utilization and a total of both. On a system with one cpu and one core, the total cputime is 1000ms. On a system with 2 cores the total is 2000ms. It may help to vizualise the performance of Dstat and its selection of plugins. 
""" def __init__(self): self.name = 'dstat cpu' self.vars = ('user', 'system', 'total') self.nick = ('usr', 'sys', 'tot') self.type = 'p' self.width = 3 self.scale = 100 def extract(self): res = resource.getrusage(resource.RUSAGE_SELF) self.set2['user'] = float(res.ru_utime) self.set2['system'] = float(res.ru_stime) self.set2['total'] = float(res.ru_utime) + float(res.ru_stime) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 100.0 / elapsed / cpunr if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_dstat_ctxt.py000066400000000000000000000020031351755116500202600ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Provide Dstat's number of voluntary and involuntary context switches. This plugin provides a unique view of the number of voluntary and involuntary context switches of the Dstat process itself. It may help to vizualise the performance of Dstat and its selection of plugins. """ def __init__(self): self.name = 'contxt sw' self.vars = ('voluntary', 'involuntary', 'total') self.type = 'd' self.width = 3 self.scale = 100 def extract(self): res = resource.getrusage(resource.RUSAGE_SELF) self.set2['voluntary'] = float(res.ru_nvcsw) self.set2['involuntary'] = float(res.ru_nivcsw) self.set2['total'] = (float(res.ru_nvcsw) + float(res.ru_nivcsw)) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_dstat_mem.py000066400000000000000000000021171351755116500200620ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Provide memory information related to the dstat process. The various values provide information about the memory usage of the dstat process. This plugin gives you the possibility to follow memory usage changes of dstat over time. 
It may help to vizualise the performance of Dstat and its selection of plugins. """ def __init__(self): self.name = 'dstat memory usage' self.vars = ('virtual', 'resident', 'shared', 'data') self.type = 'd' self.open('/proc/%s/statm' % ownpid) def extract(self): l = self.splitline() # l = linecache.getline('/proc/%s/schedstat' % self.pid, 1).split() self.val['virtual'] = int(l[0]) * pagesize / 1024 self.val['resident'] = int(l[1]) * pagesize / 1024 self.val['shared'] = int(l[2]) * pagesize / 1024 # self.val['text'] = int(l[3]) * pagesize / 1024 # self.val['library'] = int(l[4]) * pagesize / 1024 self.val['data'] = int(l[5]) * pagesize / 1024 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_fan.py000066400000000000000000000014751351755116500166570ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Fan speed in RPM (rotations per minute) as reported by ACPI. """ def __init__(self): self.name = 'fan' self.type = 'd' self.width = 4 self.scale = 500 self.open('/proc/acpi/ibm/fan') def vars(self): ret = None for l in self.splitlines(): if l[0] == 'speed:': ret = ('speed',) return ret def check(self): if not os.path.exists('/proc/acpi/ibm/fan'): raise Exception('Needs kernel IBM-ACPI support') def extract(self): if os.path.exists('/proc/acpi/ibm/fan'): for l in self.splitlines(): if l[0] == 'speed:': self.val['speed'] = int(l[1]) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_freespace.py000077500000000000000000000031651351755116500200510ustar00rootroot00000000000000### Author: Dag Wieers ### FIXME: This module needs infrastructure to provide a list of mountpoints ### FIXME: Would be nice to have a total by default (half implemented) class dstat_plugin(dstat): """ Amount of used and free space per mountpoint. 
""" def __init__(self): self.nick = ('used', 'free') self.open('/etc/mtab') self.cols = 2 def vars(self): ret = [] for l in self.splitlines(): if len(l) < 6: continue if l[2] in ('binfmt_misc', 'devpts', 'iso9660', 'none', 'proc', 'sysfs', 'usbfs', 'cgroup', 'tmpfs', 'devtmpfs', 'debugfs', 'mqueue', 'systemd-1', 'rootfs', 'autofs'): continue ### FIXME: Excluding 'none' here may not be what people want (/dev/shm) if l[0] in ('devpts', 'none', 'proc', 'sunrpc', 'usbfs', 'securityfs', 'hugetlbfs', 'configfs', 'selinuxfs', 'pstore', 'nfsd'): continue name = l[1] res = os.statvfs(name) if res[0] == 0: continue ### Skip zero block filesystems ret.append(name) #print(l[0] + " / " + name + " / " + l[2]) return ret def name(self): return ['/' + os.path.basename(name) for name in self.vars] def extract(self): self.val['total'] = (0, 0) for name in self.vars: res = os.statvfs(name) self.val[name] = ( (float(res.f_blocks) - float(res.f_bavail)) * int(res.f_frsize), float(res.f_bavail) * float(res.f_frsize) ) self.val['total'] = (self.val['total'][0] + self.val[name][0], self.val['total'][1] + self.val[name][1]) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_fuse.py000066400000000000000000000025311351755116500170470ustar00rootroot00000000000000### Author: Vikas Gorur (http://github.com/vikasgorur) class dstat_plugin(dstat): """ Waiting calls on mounted FUSE filesystems Displays the number of waiting calls on all mounted FUSE filesystems. """ def __init__(self): self.name = 'fuse' self.type = 'd' self.fusectl_path = "/sys/fs/fuse/connections/" self.dirs = [] def check(self): info(1, "Module %s is still experimental." 
% self.filename) if not os.path.exists(self.fusectl_path): raise Exception('%s not mounted' % self.fusectl_path) if len(os.listdir(self.fusectl_path)) == 0: raise Exception('No fuse filesystems mounted') def vars(self): self.dirs = os.listdir(self.fusectl_path) atleast_one_ok = False for d in self.dirs: if os.access(self.fusectl_path + d + "/waiting", os.R_OK): atleast_one_ok = True if not atleast_one_ok: raise Exception('User is not root or no fuse filesystems mounted') return self.dirs def extract(self): for d in self.dirs: path = self.fusectl_path + d + "/waiting" if os.path.exists(path): line = dopen(path).readline() self.val[d] = int(line) else: self.val[d] = 0 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_gpfs.py000066400000000000000000000031261351755116500170450ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Total amount of read and write throughput (in bytes) on a GPFS filesystem. """ def __init__(self): self.name = 'gpfs i/o' self.nick = ('read', 'write') self.vars = ('_br_', '_bw_') def check(self): if os.access('/usr/lpp/mmfs/bin/mmpmon', os.X_OK): try: self.stdin, self.stdout, self.stderr = dpopen('/usr/lpp/mmfs/bin/mmpmon -p -s') self.stdin.write('reset\n') readpipe(self.stdout) except IOError: raise Exception('Cannot interface with gpfs mmpmon binary') return True raise Exception('Needs GPFS mmpmon binary') def extract(self): try: self.stdin.write('io_s\n') # readpipe(self.stderr) for line in readpipe(self.stdout): if not line: continue l = line.split() for name in self.vars: self.set2[name] = int(l[l.index(name)+1]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed except IOError as e: if op.debug > 1: print('%s: lost pipe to mmpmon, %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 except Exception as e: if op.debug > 1: print('%s: exception %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 if step == op.delay: 
self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_gpfs_ops.py000066400000000000000000000033271351755116500177310ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Number of operations performed on a GPFS filesystem. """ def __init__(self): self.name = 'gpfs file operations' self.nick = ('open', 'clos', 'read', 'writ', 'rdir', 'inod') self.vars = ('_oc_', '_cc_', '_rdc_', '_wc_', '_dir_', '_iu_') self.type = 'd' self.width = 5 self.scale = 1000 def check(self): if os.access('/usr/lpp/mmfs/bin/mmpmon', os.X_OK): try: self.stdin, self.stdout, self.stderr = dpopen('/usr/lpp/mmfs/bin/mmpmon -p -s') self.stdin.write('reset\n') readpipe(self.stdout) except IOError: raise Exception('Cannot interface with gpfs mmpmon binary') return True raise Exception('Needs GPFS mmpmon binary') def extract(self): try: self.stdin.write('io_s\n') # readpipe(self.stderr) for line in readpipe(self.stdout): if not line: continue l = line.split() for name in self.vars: self.set2[name] = int(l[l.index(name)+1]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed except IOError as e: if op.debug > 1: print('%s: lost pipe to mmpmon, %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 except Exception as e: if op.debug > 1: print('%s: exception %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_helloworld.py000066400000000000000000000006641351755116500202650ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Example "Hello world!" output plugin for aspiring Dstat developers. """ def __init__(self): self.name = 'plugin title' self.nick = ('counter',) self.vars = ('text',) self.type = 's' self.width = 12 self.scale = 0 def extract(self): self.val['text'] = 'Hello world!' 
# vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_ib.py000066400000000000000000000061041351755116500164770ustar00rootroot00000000000000### Author: Dmitry Fedin class dstat_plugin(dstat): ibdirname = '/sys/class/infiniband' """ Bytes received or sent through infiniband/RoCE interfaces Usage: dstat --ib -N :,total default dstat --ib is the same as dstat --ib -N total example for Mellanox adapter, transfering data via port 2 dstat --ib -Nmlx4_0:2 """ def __init__(self): self.nick = ('recv', 'send') self.type = 'd' self.cols = 2 self.width = 6 def discover(self, *objlist): ret = [] for subdirname in os.listdir(self.ibdirname): if not os.path.isdir(os.path.join(self.ibdirname,subdirname)) : continue device_dir = os.path.join(self.ibdirname, subdirname, 'ports') for subdirname2 in os.listdir(device_dir) : if not os.path.isdir(os.path.join(device_dir,subdirname2)): continue name = subdirname + ":" + subdirname2 ret.append(name) ret.sort() for item in objlist: ret.append(item) return ret def vars(self): ret = [] if op.netlist: varlist = op.netlist elif not op.full: varlist = ('total',) else: varlist = self.discover varlist.sort() for name in varlist: if name in self.discover + ['total']: ret.append(name) if not ret: raise Exception('No suitable network interfaces found to monitor') return ret def name(self): return ['ib/'+name for name in self.vars] def extract(self): self.set2['total'] = [0, 0] ifaces = self.discover for name in self.vars: self.set2[name] = [0, 0] for name in ifaces: l=name.split(':'); if len(l) < 2: continue rcv_counter_name=os.path.join('/sys/class/infiniband', l[0], 'ports', l[1], 'counters_ext/port_rcv_data_64') xmit_counter_name=os.path.join('/sys/class/infiniband', l[0], 'ports', l[1], 'counters_ext/port_xmit_data_64') rcv_lines = dopen(rcv_counter_name).readlines() xmit_lines = dopen(xmit_counter_name).readlines() if len(rcv_lines) < 1 or len(xmit_lines) < 1: continue rcv_value = int(rcv_lines[0]) xmit_value = int(xmit_lines[0]) if name in self.vars : 
self.set2[name] = (rcv_value, xmit_value) self.set2['total'] = ( self.set2['total'][0] + rcv_value, self.set2['total'][1] + xmit_value) if update: for name in self.set2: self.val[name] = [ (self.set2[name][0] - self.set1[name][0]) * 4.0 / elapsed, (self.set2[name][1] - self.set1[name][1]) * 4.0/ elapsed, ] if self.val[name][0] < 0: self.val[name][0] += maxint + 1 if self.val[name][1] < 0: self.val[name][1] += maxint + 1 if step == op.delay: self.set1.update(self.set2) dstat-0.7.4/plugins/dstat_innodb_buffer.py000066400000000000000000000031341351755116500207070ustar00rootroot00000000000000### Author: Dag Wieers global mysql_options mysql_options = os.getenv('DSTAT_MYSQL') class dstat_plugin(dstat): def __init__(self): self.name = 'innodb pool' self.nick = ('crt', 'rea', 'wri') self.vars = ('created', 'read', 'written') self.type = 'f' self.width = 3 self.scale = 1000 def check(self): if not os.access('/usr/bin/mysql', os.X_OK): raise Exception('Needs MySQL binary') try: self.stdin, self.stdout, self.stderr = dpopen('/usr/bin/mysql -n %s' % mysql_options) except IOError as e: raise Exception('Cannot interface with MySQL binary (%s)' % e) def extract(self): try: self.stdin.write('show engine innodb status\G\n') line = greppipe(self.stdout, 'Pages read ') if line: l = line.split() self.set2['read'] = int(l[2].rstrip(',')) self.set2['created'] = int(l[4].rstrip(',')) self.set2['written'] = int(l[6]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) except IOError as e: if op.debug > 1: print('%s: lost pipe to mysql, %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 except Exception as e: if op.debug > 1: print('%s: exception: %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_innodb_io.py000066400000000000000000000031221351755116500200420ustar00rootroot00000000000000### Author: Dag Wieers global 
mysql_options mysql_options = os.getenv('DSTAT_MYSQL') class dstat_plugin(dstat): def __init__(self): self.name = 'innodb io ops ' self.nick = ('rea', 'wri', 'syn') self.vars = ('read', 'write', 'sync') self.type = 'f' self.width = 3 self.scale = 1000 def check(self): if os.access('/usr/bin/mysql', os.X_OK): try: self.stdin, self.stdout, self.stderr = dpopen('/usr/bin/mysql -n %s' % mysql_options) except IOError: raise Exception('Cannot interface with MySQL binary') return True raise Exception('Needs MySQL binary') def extract(self): try: self.stdin.write('show engine innodb status\G\n') line = matchpipe(self.stdout, '.*OS file reads,.*') if line: l = line.split() self.set2['read'] = int(l[0]) self.set2['write'] = int(l[4]) self.set2['sync'] = int(l[8]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) except IOError as e: if op.debug > 1: print('%s: lost pipe to mysql, %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 except Exception as e: if op.debug > 1: print('%s: exception' % (self.filename, e)) for name in self.vars: self.val[name] = -1 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_innodb_ops.py000066400000000000000000000033061351755116500202400ustar00rootroot00000000000000### Author: Dag Wieers global mysql_options mysql_options = os.getenv('DSTAT_MYSQL') class dstat_plugin(dstat): def __init__(self): self.name = 'innodb ops' self.nick = ('ins', 'upd', 'del', 'rea') self.vars = ('inserted', 'updated', 'deleted', 'read') self.type = 'f' self.width = 3 self.scale = 1000 def check(self): if os.access('/usr/bin/mysql', os.X_OK): try: self.stdin, self.stdout, self.stderr = dpopen('/usr/bin/mysql -n %s' % mysql_options) except IOError: raise Exception('Cannot interface with MySQL binary') return True raise Exception('Needs MySQL binary') def extract(self): try: self.stdin.write('show engine innodb status\G\n') line = greppipe(self.stdout, 'Number of rows 
inserted')
            if line:
                l = line.split()
                self.set2['inserted'] = int(l[4].rstrip(','))
                self.set2['updated'] = int(l[6].rstrip(','))
                self.set2['deleted'] = int(l[8].rstrip(','))
                self.set2['read'] = int(l[10])
            for name in self.vars:
                self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
            if step == op.delay:
                self.set1.update(self.set2)
        except IOError as e:
            if op.debug > 1: print('%s: lost pipe to mysql, %s' % (self.filename, e))
            for name in self.vars: self.val[name] = -1
        except Exception as e:
            if op.debug > 1: print('%s: exception %s' % (self.filename, e))
            for name in self.vars: self.val[name] = -1

# vim:ts=4:sw=4:et
dstat-0.7.4/plugins/dstat_jvm_full.py000066400000000000000000000106531351755116500177270ustar00rootroot00000000000000
# Author: Roberto Polli
#
# NOTE: Edit the jcmd location according to your path or use update-alternatives.

global BIN_JCMD
BIN_JCMD = '/usr/bin/jcmd'

class dstat_plugin(dstat):
    """
    This plugin gathers jvm stats via jcmd.

    Usage: JVM_PID=15123 dstat --jvm-full

    Minimize the impact of jcmd and consider using: dstat --noupdate

    For full information on jcmd see:
    - http://docs.oracle.com/javase/7/docs/technotes/tools/solaris/jcmd.html
    - https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr006.html

    This requires the presence of the /tmp/hsperfdata_* directory, so it
    WON'T WORK if you add -XX:-UsePerfData or -XX:+PerfDisableSharedMem.
    """
    def __init__(self):
        self.name = 'jvm_full'
        self.vars = ('clsL', 'clsU', 'fgc', 'heap', 'heap%', 'heapmax', 'perm', 'perm%', 'permmax')
        self.type = 'f'
        self.width = 5
        self.scale = 1000

    def check(self):
        """Preliminary checks. If no pid is passed, defaults to 0.
""" if not os.access(BIN_JCMD, os.X_OK): raise Exception('Needs jstat binary') try: self.jvm_pid = int(os.environ.get('JVM_PID',0)) except Exception as e: self.jvm_pid = 0 return True @staticmethod def _to_stat(k, v): try: return k, int(v) except (KeyError, ValueError, AttributeError): return k, v @staticmethod def _cmd_splitlines(cmd): """Splits a txt output of lines like key=value. """ for l in os.popen(cmd): yield l.strip().split("=", 1) def extract(self): try: lines = self._cmd_splitlines( '%s %s PerfCounter.print ' % (BIN_JCMD, self.jvm_pid)) table = dict(self._to_stat(*l) for l in lines if len(l) > 1) if table: # Number of loaded classes. self.set2['clsL'] = table['java.cls.loadedClasses'] self.set2['clsU'] = table['java.cls.unloadedClasses'] # Number of Full Garbage Collection events. self.set2['fgc'] = table['sun.gc.collector.1.invocations'] # The heap space is made up of Old Generation and Young # Generation (which is divided in Eden, Survivor0 and # Survivor1) self.set2['heap'] = table['sun.gc.generation.1.capacity'] + table[ 'sun.gc.generation.0.capacity'] # Usage is hidden in the nested spaces. self.set2['heapu'] = sum(table[k] for k in table if 'sun.gc.generation.' in k and 'used' in k) self.set2['heapmax'] = table['sun.gc.generation.1.maxCapacity'] + table[ 'sun.gc.generation.0.maxCapacity'] # Use PermGen on jdk7 and the new metaspace on jdk8 try: self.set2['perm'] = table['sun.gc.generation.2.capacity'] self.set2['permu'] = sum(table[k] for k in table if 'sun.gc.generation.2.' in k and 'used' in k) self.set2['permmax'] = table[ 'sun.gc.generation.2.maxCapacity'] except KeyError: self.set2['perm'] = table['sun.gc.metaspace.capacity'] self.set2['permu'] = table['sun.gc.metaspace.used'] self.set2['permmax'] = table[ 'sun.gc.metaspace.maxCapacity'] # Evaluate statistics on memory usage. 
for name in ('heap', 'perm'): self.set2[name + '%'] = 100 * self.set2[ name + 'u'] / self.set2[name] for name in self.vars: self.val[name] = self.set2[name] if step == op.delay: self.set1.update(self.set2) except IOError as e: if op.debug > 1: print('%s: lost pipe to jstat, %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 except Exception as e: if op.debug > 1: print('%s: exception' % e) for name in self.vars: self.val[name] = -1 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_jvm_vm.py000066400000000000000000000053151351755116500174060ustar00rootroot00000000000000# Author: Roberto Polli # # This plugin shows jvm stats using the JVM_PID environment variable. # Requires the presence of the /tmp/hsperfdata_* directory and # files created when running java with the profiler enabled. # class dstat_plugin(dstat): def __init__(self): self.name = 'jvm mem ops ' self.vars = ('fgc', 'heap', 'heap%', 'perm', 'perm%') self.type = 'f' self.width = 5 self.scale = 1000 def check(self): if not os.access('/usr/bin/jstat', os.X_OK): raise Exception('Needs jstat binary') try: self.jvm_pid = int(os.environ.get('JVM_PID', 0)) except Exception: self.jvm_pid = 0 return True @staticmethod def _to_float(s): return float(s.replace(",", ".")) @staticmethod def _cmd_splitlines(cmd): for l in os.popen(cmd): yield l.strip().split() def extract(self): from collections import namedtuple try: lines = self._cmd_splitlines( '/usr/bin/jstat -gc %s' % self.jvm_pid) headers = next(lines) DStatParser = namedtuple('DStatParser', headers) line = next(lines) if line: stats = DStatParser(*[self._to_float(x) for x in line]) # print(stats) self.set2['cls'] = 0 self.set2['fgc'] = int(stats.FGC) self.set2['heap'] = ( stats.S0C + stats.S1C + stats.EC + stats.OC) self.set2['heapu'] = ( stats.S0U + stats.S1U + stats.EU + stats.OU) # Use MetaSpace on jdk8 try: self.set2['perm'] = stats.PC self.set2['permu'] = stats.PU except AttributeError: self.set2['perm'] = stats.MC self.set2['permu'] = 
stats.MU # Evaluate statistics on memory usage. for name in ('heap', 'perm'): self.set2[name + '%'] = 100 * self.set2[ name + 'u'] / self.set2[name] self.set2[name] /= 1024 for name in self.vars: self.val[name] = self.set2[name] if step == op.delay: self.set1.update(self.set2) except IOError as e: if op.debug > 1: print('%s: lost pipe to jstat, %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 except Exception as e: if op.debug > 1: print('%s: exception' % e) for name in self.vars: self.val[name] = -1 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_lustre.py000066400000000000000000000023051351755116500174220ustar00rootroot00000000000000# Author: Brock Palen , Kilian Vavalotti class dstat_plugin(dstat): def __init__(self): self.nick = ('read', 'write') self.cols = 2 def check(self): if not os.path.exists('/proc/fs/lustre/llite'): raise Exception('Lustre filesystem not found') info(1, 'Module %s is still experimental.' % self.filename) def name(self): return [mount for mount in os.listdir('/proc/fs/lustre/llite')] def vars(self): return [mount for mount in os.listdir('/proc/fs/lustre/llite')] def extract(self): for name in self.vars: for line in dopen(os.path.join('/proc/fs/lustre/llite', name, 'stats')).readlines(): l = line.split() if len(l) < 6: continue if l[0] == 'read_bytes': read = int(l[6]) elif l[0] == 'write_bytes': write = int(l[6]) self.set2[name] = (read, write) self.val[name] = list(map(lambda x, y: (y - x) * 1.0 / elapsed, self.set1[name], self.set2[name])) if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4 dstat-0.7.4/plugins/dstat_md_status.py000066400000000000000000000024421351755116500201110ustar00rootroot00000000000000### Author: Bert de Bruijn class dstat_plugin(dstat): """ Recovery state of software RAID rebuild. Prints completed recovery percentage and rebuild speed of the md device that is actively being recovered or resynced. If no devices are being rebuilt, it displays 100%, 0B. 
If instead multiple devices are being rebuilt, it displays the total progress and total throughput. """ def __init__(self): self.name = 'sw raid' self.type = 's' self.scale = 0 self.nick = ('pct speed', ) self.width = 9 self.vars = ('text', ) self.open('/proc/mdstat') def check(self): if not os.path.exists('/proc/mdstat'): raise Exception('Needs kernel md support') def extract(self): pct = 0 speed = 0 nr = 0 for l in self.splitlines(): if len(l) < 2: continue if l[1] in ('recovery', 'reshape', 'resync'): nr += 1 pct += int(l[3][0:2].strip('.%')) speed += int(l[6].strip('sped=K/sc')) * 1024 if nr: pct = pct / nr else: pct = 100 self.val['text'] = '%s %s' % (cprint(pct, 'p', 3, 34), cprint(speed, 'd', 5, 1024)) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_memcache_hits.py000077500000000000000000000014121351755116500206760ustar00rootroot00000000000000### Author: Dean Wilson class dstat_plugin(dstat): """ Memcache hit count plugin. Displays the number of memcache get_hits and get_misses. """ def __init__(self): self.name = 'Memcache Hits' self.nick = ('Hit', 'Miss') self.vars = ('get_hits', 'get_misses') self.type = 'd' self.width = 6 self.scale = 50 def check(self): try: global memcache import memcache self.mc = memcache.Client(['127.0.0.1:11211'], debug=0) except: raise Exception('Plugin needs the memcache module') def extract(self): stats = self.mc.get_stats() for key in self.vars: self.val[key] = int(stats[0][1][key]) dstat-0.7.4/plugins/dstat_mongodb_conn.py000066400000000000000000000022321351755116500205450ustar00rootroot00000000000000### Author: global mongodb_user mongodb_user = os.getenv('DSTAT_MONGODB_USER') or os.getenv('USER') global mongodb_pwd mongodb_pwd = os.getenv('DSTAT_MONGODB_PWD') global mongodb_host mongodb_host = os.getenv('DSTAT_MONGODB_HOST') or '127.0.0.1:27017' class dstat_plugin(dstat): """ Plugin for MongoDB. 
""" def __init__(self): global pymongo import pymongo try: self.m = pymongo.MongoClient(mongodb_host) if mongodb_pwd: self.m.admin.authenticate(mongodb_user, mongodb_pwd) self.db = self.m.admin except Exception as e: raise Exception('Cannot interface with MongoDB server: %s' % e) self.name = 'mongodb con' self.nick = ('curr', 'avail') self.vars = ('connections.current', 'connections.available') self.type = 'd' self.width = 5 self.scale = 2 self.lastVal = {} def extract(self): status = self.db.command("serverStatus") for name in self.vars: self.val[name] = (int(self.getDoc(status, name))) def getDoc(self, dic, doc): par = doc.split('.') sdic = dic for p in par: sdic = sdic.get(p) return sdic dstat-0.7.4/plugins/dstat_mongodb_mem.py000066400000000000000000000034461351755116500203760ustar00rootroot00000000000000### Author: global mongodb_user mongodb_user = os.getenv('DSTAT_MONGODB_USER') or os.getenv('USER') global mongodb_pwd mongodb_pwd = os.getenv('DSTAT_MONGODB_PWD') global mongodb_host mongodb_host = os.getenv('DSTAT_MONGODB_HOST') or '127.0.0.1:27017' class dstat_plugin(dstat): """ Plugin for MongoDB. 
""" def __init__(self): global pymongo import pymongo try: self.m = pymongo.MongoClient(mongodb_host) if mongodb_pwd: self.m.admin.authenticate(mongodb_user, mongodb_pwd) self.db = self.m.admin except Exception as e: raise Exception('Cannot interface with MongoDB server: %s' % e) line = self.db.command("serverStatus") if 'storageEngine' in line: self.storageEngine = line.get('storageEngine').get('name') else: self.storageEngine = 'mmapv1' self.name = 'mongodb mem' self.nick = ('res', 'virt') self.vars = ('mem.resident', 'mem.virtual') self.type = 'd' self.width = 5 self.scale = 2 self.lastVal = {} if self.storageEngine == 'mmapv1': self.nick = self.nick + ('map', 'mapj', 'flt') self.vars = self.vars + ('mem.mapped', 'mem.mappedWithJournal', 'extra_info.page_faults') def extract(self): status = self.db.command("serverStatus") for name in self.vars: if name in ('extra_info.page_faults'): if not name in self.lastVal: self.lastVal[name] = int(self.getDoc(status, name)) self.val[name] = (int(self.getDoc(status, name)) - self.lastVal[name]) self.lastVal[name] = self.getDoc(status, name) else: self.val[name] = (int(self.getDoc(status, name))) def getDoc(self, dic, doc): par = doc.split('.') sdic = dic for p in par: sdic = sdic.get(p) return sdic dstat-0.7.4/plugins/dstat_mongodb_opcount.py000066400000000000000000000024301351755116500212770ustar00rootroot00000000000000### Author: global mongodb_user mongodb_user = os.getenv('DSTAT_MONGODB_USER') or os.getenv('USER') global mongodb_pwd mongodb_pwd = os.getenv('DSTAT_MONGODB_PWD') global mongodb_host mongodb_host = os.getenv('DSTAT_MONGODB_HOST') or '127.0.0.1:27017' class dstat_plugin(dstat): """ Plugin for MongoDB. 
""" def __init__(self): global pymongo import pymongo try: self.m = pymongo.MongoClient(mongodb_host) if mongodb_pwd: self.m.admin.authenticate(mongodb_user, mongodb_pwd) self.db = self.m.admin except Exception as e: raise Exception('Cannot interface with MongoDB server: %s' % e) self.name = 'mongodb counts' self.nick = ('qry', 'ins', 'upd', 'del', 'gtm', 'cmd') self.vars = ('query', 'insert','update','delete','getmore','command') self.type = 'd' self.width = 5 self.scale = 2 self.lastVal = {} def extract(self): status = self.db.command("serverStatus") opct = status['opcounters'] for name in self.vars: if name in opct.iterkeys(): if not name in self.lastVal: self.lastVal[name] = opct.get(name) self.val[name] = (int(opct.get(name)) - self.lastVal[name]) / elapsed self.lastVal[name] = opct.get(name) dstat-0.7.4/plugins/dstat_mongodb_queue.py000066400000000000000000000023101351755116500207310ustar00rootroot00000000000000### Author: global mongodb_user mongodb_user = os.getenv('DSTAT_MONGODB_USER') or os.getenv('USER') global mongodb_pwd mongodb_pwd = os.getenv('DSTAT_MONGODB_PWD') global mongodb_host mongodb_host = os.getenv('DSTAT_MONGODB_HOST') or '127.0.0.1:27017' class dstat_plugin(dstat): """ Plugin for MongoDB. 
""" def __init__(self): global pymongo import pymongo try: self.m = pymongo.MongoClient(mongodb_host) if mongodb_pwd: self.m.admin.authenticate(mongodb_user, mongodb_pwd) self.db = self.m.admin except Exception as e: raise Exception('Cannot interface with MongoDB server: %s' % e) self.name = 'mongodb queues' self.nick = ('ar', 'aw', 'qt', 'qw') self.vars = ('ar', 'aw', 'qt', 'qw') self.type = 'd' self.width = 5 self.scale = 2 self.lastVal = {} def extract(self): status = self.db.command("serverStatus") glock = status['globalLock'] alock = glock['activeClients'] qlock = glock['currentQueue'] self.val['ar'] = int(alock['readers']) self.val['aw'] = int(alock['writers']) self.val['qr'] = int(qlock['readers']) self.val['qw'] = int(qlock['writers']) dstat-0.7.4/plugins/dstat_mongodb_stats.py000066400000000000000000000036261351755116500207560ustar00rootroot00000000000000### Author: global mongodb_user mongodb_user = os.getenv('DSTAT_MONGODB_USER') or os.getenv('USER') global mongodb_pwd mongodb_pwd = os.getenv('DSTAT_MONGODB_PWD') global mongodb_host mongodb_host = os.getenv('DSTAT_MONGODB_HOST') or '127.0.0.1:27017' class dstat_plugin(dstat): """ Plugin for MongoDB. 
""" def __init__(self): global pymongo import pymongo try: self.m = pymongo.MongoClient(mongodb_host) if mongodb_pwd: self.m.admin.authenticate(mongodb_user, mongodb_pwd) self.db = self.m.admin except Exception as e: raise Exception('Cannot interface with MongoDB server: %s' % e) stats = self.db.command("listDatabases") self.dbList = [] for db in stats.get('databases'): self.dbList.append(db.get('name')) line = self.db.command("serverStatus") if 'storageEngine' in line: self.storageEngine = line.get('storageEngine').get('name') else: self.storageEngine = 'mmapv1' self.name = 'mongodb stats' self.nick = ('dsize', 'isize', 'ssize') self.vars = ('dataSize', 'indexSize', 'storageSize') self.type = 'b' self.width = 5 self.scale = 2 self.count = 1 if self.storageEngine == 'mmapv1': self.nick = self.nick + ('fsize',) self.vars = self.vars + ('fileSize',) def extract(self): self.set = {} # refresh the database list every 10 iterations if (self.count % 10) == 0: stats = self.m.admin.command("listDatabases") self.dbList = [] for db in stats.get('databases'): self.dbList.append(db.get('name')) self.count += 1 for name in self.vars: self.set[name] = 0 for db in self.dbList: self.db = self.m.get_database(db) stats = self.db.command("dbStats") for name in self.vars: self.set[name] += int(stats.get(name)) / (1024 * 1024) self.val = self.set dstat-0.7.4/plugins/dstat_mysql5_cmds.py000066400000000000000000000037701351755116500203530ustar00rootroot00000000000000### Author: global mysql_user mysql_user = os.getenv('DSTAT_MYSQL_USER') or os.getenv('USER') global mysql_pwd mysql_pwd = os.getenv('DSTAT_MYSQL_PWD') global mysql_host mysql_host = os.getenv('DSTAT_MYSQL_HOST') global mysql_port mysql_port = os.getenv('DSTAT_MYSQL_PORT') global mysql_socket mysql_socket = os.getenv('DSTAT_MYSQL_SOCKET') class dstat_plugin(dstat): """ Plugin for MySQL 5 commands. 
""" def __init__(self): self.name = 'mysql5 cmds' self.nick = ('sel', 'ins','upd','del') self.vars = ('Com_select', 'Com_insert','Com_update','Com_delete') self.type = 'd' self.width = 5 self.scale = 1 def check(self): global MySQLdb import MySQLdb try: args = {} if mysql_user: args['user'] = mysql_user if mysql_pwd: args['passwd'] = mysql_pwd if mysql_host: args['host'] = mysql_host if mysql_port: args['port'] = mysql_port if mysql_socket: args['unix_socket'] = mysql_socket self.db = MySQLdb.connect(**args) except Exception as e: raise Exception('Cannot interface with MySQL server: %s' % e) def extract(self): try: c = self.db.cursor() for name in self.vars: c.execute("""show global status like '%s';""" % name) line = c.fetchone() if line[0] in self.vars: if line[0] + 'raw' in self.set2: self.set2[line[0]] = int(line[1]) - self.set2[line[0] + 'raw'] self.set2[line[0] + 'raw'] = int(line[1]) for name in self.vars: self.val[name] = self.set2[name] * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) except Exception as e: for name in self.vars: self.val[name] = -1 dstat-0.7.4/plugins/dstat_mysql5_conn.py000066400000000000000000000037661351755116500203670ustar00rootroot00000000000000### Author: global mysql_user mysql_user = os.getenv('DSTAT_MYSQL_USER') or os.getenv('USER') global mysql_pwd mysql_pwd = os.getenv('DSTAT_MYSQL_PWD') global mysql_host mysql_host = os.getenv('DSTAT_MYSQL_HOST') global mysql_port mysql_port = os.getenv('DSTAT_MYSQL_PORT') global mysql_socket mysql_socket = os.getenv('DSTAT_MYSQL_SOCKET') class dstat_plugin(dstat): """ Plugin for MySQL 5 connections. 
""" def __init__(self): self.name = 'mysql5 conn' self.nick = ('ThCon', '%Con') self.vars = ('Threads_connected', 'Threads') self.type = 'f' self.width = 4 self.scale = 1 def check(self): global MySQLdb import MySQLdb try: args = {} if mysql_user: args['user'] = mysql_user if mysql_pwd: args['passwd'] = mysql_pwd if mysql_host: args['host'] = mysql_host if mysql_port: args['port'] = mysql_port if mysql_socket: args['unix_socket'] = mysql_socket self.db = MySQLdb.connect(**args) except Exception as e: raise Exception('Cannot interface with MySQL server, %s' % e) def extract(self): try: c = self.db.cursor() c.execute("""show global variables like 'max_connections';""") max = c.fetchone() c.execute("""show global status like 'Threads_connected';""") thread = c.fetchone() if thread[0] in self.vars: self.set2[thread[0]] = float(thread[1]) self.set2['Threads'] = float(thread[1]) / float(max[1]) * 100.0 for name in self.vars: self.val[name] = self.set2[name] * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) except Exception as e: for name in self.vars: self.val[name] = -1 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_mysql5_innodb.py000066400000000000000000000100161351755116500206650ustar00rootroot00000000000000### Author: HIROSE Masaaki global mysql_options mysql_options = os.getenv('DSTAT_MYSQL') or '' global target_status global _basic_status global _extra_status _basic_status = ( ('Queries' , 'qps'), ('Com_select' , 'sel/s'), ('Com_insert' , 'ins/s'), ('Com_update' , 'upd/s'), ('Com_delete' , 'del/s'), ('Connections' , 'con/s'), ('Threads_connected' , 'thcon'), ('Threads_running' , 'thrun'), ('Slow_queries' , 'slow'), ) _extra_status = ( ('Innodb_rows_read' , 'r#read'), ('Innodb_rows_inserted' , 'r#ins'), ('Innodb_rows_updated' , 'r#upd'), ('Innodb_rows_deleted' , 'r#del'), ('Innodb_data_reads' , 'rdphy'), ('Innodb_buffer_pool_read_requests', 'rdlgc'), ('Innodb_data_writes' , 'wrdat'), ('Innodb_log_writes' , 'wrlog'), 
('innodb_buffer_pool_pages_dirty_pct', '%dirty'), ) global calculating_status calculating_status = ( 'Innodb_buffer_pool_pages_total', 'Innodb_buffer_pool_pages_dirty', ) global gauge gauge = { 'Slow_queries' : 1, 'Threads_connected' : 1, 'Threads_running' : 1, } class dstat_plugin(dstat): """ mysql5-innodb, mysql5-innodb-basic, mysql5-innodb-extra display various metircs on MySQL5 and InnoDB. """ def __init__(self): self.name = 'MySQL5 InnoDB ' self.type = 'd' self.width = 5 self.scale = 1000 def check(self): if self.filename.find("basic") >= 0: target_status = _basic_status self.name += 'basic' elif self.filename.find("extra") >= 0: target_status = _extra_status self.name += 'extra' elif self.filename.find("full") >= 0: target_status = _basic_status + _extra_status self.name += 'full' else: target_status = _basic_status + _extra_status self.name += 'full' self.vars = tuple( map((lambda e: e[0]), target_status) ) self.nick = tuple( map((lambda e: e[1]), target_status) ) mysql_candidate = ('/usr/bin/mysql', '/usr/local/bin/mysql') mysql_cmd = '' for mc in mysql_candidate: if os.access(mc, os.X_OK): mysql_cmd = mc break if mysql_cmd: try: self.stdin, self.stdout, self.stderr = dpopen('%s -n %s' % (mysql_cmd, mysql_options)) except IOError: raise Exception('Cannot interface with MySQL binary') return True raise Exception('Needs MySQL binary') def extract(self): try: self.stdin.write('show global status;\n') for line in readpipe(self.stdout): if line == '': break s = line.split() if s[0] in self.vars: self.set2[s[0]] = float(s[1]) elif s[0] in calculating_status: self.set2[s[0]] = float(s[1]) for k in self.vars: if k in gauge: self.val[k] = self.set2[k] elif k == 'innodb_buffer_pool_pages_dirty_pct': self.val[k] = self.set2['Innodb_buffer_pool_pages_dirty'] / self.set2['Innodb_buffer_pool_pages_total'] * 100 else: self.val[k] = (self.set2[k] - self.set1[k]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) except IOError as e: if op.debug > 1: 
print('%s: lost pipe to mysql, %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 except Exception as e: if op.debug > 1: print('%s: exception' % (self.filename, e)) for name in self.vars: self.val[name] = -1 dstat-0.7.4/plugins/dstat_mysql5_innodb_basic.py000077700000000000000000000000001351755116500263572dstat_mysql5_innodb.pyustar00rootroot00000000000000dstat-0.7.4/plugins/dstat_mysql5_innodb_extra.py000077700000000000000000000000001351755116500264212dstat_mysql5_innodb.pyustar00rootroot00000000000000dstat-0.7.4/plugins/dstat_mysql5_io.py000066400000000000000000000036561351755116500200370ustar00rootroot00000000000000### Author: global mysql_user mysql_user = os.getenv('DSTAT_MYSQL_USER') or os.getenv('USER') global mysql_pwd mysql_pwd = os.getenv('DSTAT_MYSQL_PWD') global mysql_host mysql_host = os.getenv('DSTAT_MYSQL_HOST') global mysql_port mysql_port = os.getenv('DSTAT_MYSQL_PORT') global mysql_socket mysql_socket = os.getenv('DSTAT_MYSQL_SOCKET') class dstat_plugin(dstat): """ Plugin for MySQL 5 I/O. 
""" def __init__(self): self.name = 'mysql5 io' self.nick = ('recv', 'sent') self.vars = ('Bytes_received', 'Bytes_sent') def check(self): global MySQLdb import MySQLdb try: args = {} if mysql_user: args['user'] = mysql_user if mysql_pwd: args['passwd'] = mysql_pwd if mysql_host: args['host'] = mysql_host if mysql_port: args['port'] = mysql_port if mysql_socket: args['unix_socket'] = mysql_socket self.db = MySQLdb.connect(**args) except: raise Exception('Cannot interface with MySQL server') def extract(self): try: c = self.db.cursor() c.execute("""show global status like 'Bytes_%';""") lines = c.fetchall() for line in lines: if len(line[1]) < 2: continue if line[0] in self.vars: if line[0] + 'raw' in self.set2: self.set2[line[0]] = float(line[1]) - self.set2[line[0] + 'raw'] self.set2[line[0] + 'raw'] = float(line[1]) for name in self.vars: self.val[name] = self.set2[name] * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) except Exception as e: for name in self.vars: self.val[name] = -1 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_mysql5_keys.py000066400000000000000000000036721351755116500204010ustar00rootroot00000000000000### Author: global mysql_user mysql_user = os.getenv('DSTAT_MYSQL_USER') or os.getenv('USER') global mysql_pwd mysql_pwd = os.getenv('DSTAT_MYSQL_PWD') global mysql_host mysql_host = os.getenv('DSTAT_MYSQL_HOST') global mysql_port mysql_port = os.getenv('DSTAT_MYSQL_PORT') global mysql_socket mysql_socket = os.getenv('DSTAT_MYSQL_SOCKET') class dstat_plugin(dstat): """ Plugin for MySQL 5 Keys. 
""" def __init__(self): self.name = 'mysql5 key status' self.nick = ('used', 'read', 'writ', 'rreq', 'wreq') self.vars = ('Key_blocks_used', 'Key_reads', 'Key_writes', 'Key_read_requests', 'Key_write_requests') self.type = 'f' self.width = 4 self.scale = 1000 def check(self): global MySQLdb import MySQLdb try: args = {} if mysql_user: args['user'] = mysql_user if mysql_pwd: args['passwd'] = mysql_pwd if mysql_host: args['host'] = mysql_host if mysql_port: args['port'] = mysql_port if mysql_socket: args['unix_socket'] = mysql_socket self.db = MySQLdb.connect(**args) except: raise Exception('Cannot interface with MySQL server') def extract(self): try: c = self.db.cursor() c.execute("""show global status like 'Key_%';""") lines = c.fetchall() for line in lines: if len(line[1]) < 2: continue if line[0] in self.vars: self.set2[line[0]] = float(line[1]) for name in self.vars: self.val[name] = self.set2[name] * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) except Exception as e: for name in self.vars: self.val[name] = -1 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_mysql_io.py000066400000000000000000000026121351755116500177410ustar00rootroot00000000000000global mysql_options mysql_options = os.getenv('DSTAT_MYSQL') class dstat_plugin(dstat): def __init__(self): self.name = 'mysql io' self.nick = ('recv', 'sent') self.vars = ('Bytes_received', 'Bytes_sent') def check(self): if not os.access('/usr/bin/mysql', os.X_OK): raise Exception('Needs MySQL binary') try: self.stdin, self.stdout, self.stderr = dpopen('/usr/bin/mysql -n %s' % mysql_options) except IOError: raise Exception('Cannot interface with MySQL binary') def extract(self): try: self.stdin.write("show status like 'Bytes_%';\n") for line in readpipe(self.stdout): l = line.split() if len(l) < 2: continue if l[0] in self.vars: self.set2[l[0]] = float(l[1]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) except 
IOError as e:
            if op.debug > 1: print('%s: lost pipe to mysql, %s' % (self.filename, e))
            for name in self.vars: self.val[name] = -1
        except Exception as e:
            if op.debug > 1: print('%s: exception %s' % (self.filename, e))
            for name in self.vars: self.val[name] = -1

# vim:ts=4:sw=4:et
dstat-0.7.4/plugins/dstat_mysql_keys.py000066400000000000000000000030511351755116500203030ustar00rootroot00000000000000
global mysql_options
mysql_options = os.getenv('DSTAT_MYSQL')

class dstat_plugin(dstat):
    def __init__(self):
        self.name = 'mysql key status'
        self.nick = ('used', 'read', 'writ', 'rreq', 'wreq')
        self.vars = ('Key_blocks_used', 'Key_reads', 'Key_writes', 'Key_read_requests', 'Key_write_requests')
        self.type = 'f'
        self.width = 4
        self.scale = 1000

    def check(self):
        if not os.access('/usr/bin/mysql', os.X_OK):
            raise Exception('Needs MySQL binary')
        try:
            self.stdin, self.stdout, self.stderr = dpopen('/usr/bin/mysql -n %s' % mysql_options)
        except IOError:
            raise Exception('Cannot interface with MySQL binary')

    def extract(self):
        try:
            self.stdin.write("show status like 'Key_%';\n")
            for line in readpipe(self.stdout):
                l = line.split()
                if len(l) < 2: continue
                if l[0] in self.vars:
                    self.set2[l[0]] = float(l[1])
            for name in self.vars:
                self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
            if step == op.delay:
                self.set1.update(self.set2)
        except IOError as e:
            if op.debug > 1: print('%s: lost pipe to mysql, %s' % (self.filename, e))
            for name in self.vars: self.val[name] = -1
        except Exception as e:
            if op.debug > 1: print('%s: exception %s' % (self.filename, e))
            for name in self.vars: self.val[name] = -1

# vim:ts=4:sw=4:et
dstat-0.7.4/plugins/dstat_net_packets.py000066400000000000000000000040461351755116500204100ustar00rootroot00000000000000
### Author: Dag Wieers

class dstat_plugin(dstat):
    """
    Number of packets received and sent per interface.
""" def __init__(self): self.nick = ('#recv', '#send') self.type = 'd' self.width = 5 self.scale = 1000 self.totalfilter = re.compile('^(lo|bond\d+|face|.+\.\d+)$') self.open('/proc/net/dev') self.cols = 2 def discover(self, *objlist): ret = [] for l in self.splitlines(replace=':'): if len(l) < 17: continue if l[2] == '0' and l[10] == '0': continue name = l[0] if name not in ('lo', 'face'): ret.append(name) ret.sort() for item in objlist: ret.append(item) return ret def vars(self): ret = [] if op.netlist: varlist = op.netlist elif not op.full: varlist = ('total',) else: varlist = self.discover # if len(varlist) > 2: varlist = varlist[0:2] varlist.sort() for name in varlist: if name in self.discover + ['total', 'lo']: ret.append(name) if not ret: raise Exception('No suitable network interfaces found to monitor') return ret def name(self): return ['pkt/'+name for name in self.vars] def extract(self): self.set2['total'] = [0, 0] for l in self.splitlines(replace=':'): if len(l) < 17: continue if l[2] == '0' and l[10] == '0': continue name = l[0] if name in self.vars : self.set2[name] = ( int(l[2]), int(l[10]) ) if not self.totalfilter.match(name): self.set2['total'] = ( self.set2['total'][0] + int(l[2]), self.set2['total'][1] + int(l[10])) if update: for name in self.set2: self.val[name] = list(map(lambda x, y: (y - x) * 1.0 / elapsed, self.set1[name], self.set2[name])) if step == op.delay: self.set1.update(self.set2) dstat-0.7.4/plugins/dstat_nfs3.py000066400000000000000000000022071351755116500167560ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'nfs3 client' self.nick = ('read', 'writ', 'rdir', 'othr', 'fs', 'cmmt') self.vars = ('read', 'write', 'readdir', 'other', 'filesystem', 'commit') self.type = 'd' self.width = 5 self.scale = 1000 self.open('/proc/net/rpc/nfs') def extract(self): for l in self.splitlines(): if not l or l[0] != 'proc3': continue self.set2['read'] = int(l[8]) self.set2['write'] = 
int(l[9]) self.set2['readdir'] = int(l[18]) + int(l[19]) self.set2['other'] = int(l[3]) + int(l[4]) + int(l[5]) + int(l[6]) + int(l[7]) + int(l[10]) + int(l[11]) + int(l[12]) + int(l[13]) + int(l[14]) + int(l[15]) + int(l[16]) + int(l[17]) self.set2['filesystem'] = int(l[20]) + int(l[21]) + int(l[22]) self.set2['commit'] = int(l[23]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_nfs3_ops.py000066400000000000000000000022521351755116500176370ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'extended nfs3 client operations' self.nick = ('null', 'gatr', 'satr', 'look', 'aces', 'rdln', 'read', 'writ', 'crea', 'mkdr', 'syml', 'mknd', 'rm', 'rmdr', 'ren', 'link', 'rdir', 'rdr+', 'fstt', 'fsnf', 'path', 'cmmt') self.vars = ('null', 'getattr', 'setattr', 'lookup', 'access', 'readlink', 'read', 'write', 'create', 'mkdir', 'symlink', 'mknod', 'remove', 'rmdir', 'rename', 'link', 'readdir', 'readdirplus', 'fsstat', 'fsinfo', 'pathconf', 'commit') self.type = 'd' self.width = 5 self.scale = 1000 self.open('/proc/net/rpc/nfs') def check(self): info(1, 'Module %s is still experimental.' 
% self.filename) def extract(self): for l in self.splitlines(): if not l or l[0] != 'proc3': continue for i, name in enumerate(self.vars): self.set2[name] = int(l[i+2]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_nfsd3.py000066400000000000000000000023421351755116500171220ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'nfs3 server' self.nick = ('read', 'writ', 'rdir', 'inod', 'fs', 'cmmt') self.vars = ('read', 'write', 'readdir', 'inode', 'filesystem', 'commit') self.type = 'd' self.width = 5 self.scale = 1000 self.open('/proc/net/rpc/nfsd') def check(self): info(1, 'Module %s is still experimental.' % self.filename) def extract(self): for l in self.splitlines(): if not l or l[0] != 'proc3': continue self.set2['read'] = int(l[8]) self.set2['write'] = int(l[9]) self.set2['readdir'] = int(l[18]) + int(l[19]) self.set2['inode'] = int(l[3]) + int(l[4]) + int(l[5]) + int(l[6]) + int(l[7]) + int(l[10]) + int(l[11]) + int(l[12]) + int(l[13]) + int(l[14]) + int(l[15]) + int(l[16]) + int(l[17]) self.set2['filesystem'] = int(l[20]) + int(l[21]) + int(l[22]) self.set2['commit'] = int(l[23]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_nfsd3_ops.py000066400000000000000000000022531351755116500200040ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'extended nfs3 server operations' self.nick = ('null', 'gatr', 'satr', 'look', 'aces', 'rdln', 'read', 'writ', 'crea', 'mkdr', 'syml', 'mknd', 'rm', 'rmdr', 'ren', 'link', 'rdir', 'rdr+', 'fstt', 'fsnf', 'path', 'cmmt') self.vars = ('null', 'getattr', 'setattr', 'lookup', 'access', 'readlink', 'read', 'write', 'create', 'mkdir', 
'symlink', 'mknod', 'remove', 'rmdir', 'rename', 'link', 'readdir', 'readdirplus', 'fsstat', 'fsinfo', 'pathconf', 'commit') self.type = 'd' self.width = 5 self.scale = 1000 self.open('/proc/net/rpc/nfsd') def check(self): info(1, 'Module %s is still experimental.' % self.filename) def extract(self): for l in self.splitlines(): if not l or l[0] != 'proc3': continue for i, name in enumerate(self.vars): self.set2[name] = int(l[i+2]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_nfsd4_ops.py000066400000000000000000000105011351755116500200000ustar00rootroot00000000000000### Author: Adam Michel ### Based on work by: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'nfs4 server' # this vars/nick pair is the ones I considered relevant. Any set of the full list would work. self.vars = ('read','write','readdir','getattr','setattr','commit','getfh','putfh', 'savefh','restorefh','open','open_conf','close','access','lookup','remove') self.nick = ('read', 'writ', 'rdir', 'gatr','satr','cmmt','gfh','pfh','sfh','rfh', 'open','opnc','clse','accs','lkup','rem') # this is every possible variable for NFSv4 server if you're into that #self.vars4 = ('op0-unused', 'op1-unused', 'op2-future' , 'access', # 'close', 'commit', 'create', 'delegpurge', 'delegreturn', 'getattr', 'getfh', # 'link', 'lock', 'lockt', 'locku', 'lookup', 'lookup_root', 'nverify', 'open', # 'openattr', 'open_conf', 'open_dgrd','putfh', 'putpubfh', 'putrootfh', # 'read', 'readdir', 'readlink', 'remove', 'rename','renew', 'restorefh', # 'savefh', 'secinfo', 'setattr', 'setcltid', 'setcltidconf', 'verify', 'write', # 'rellockowner') # I separated the NFSv41 ops cause you know, completeness. 
#self.vars41 = ('bc_ctl', 'bind_conn', 'exchange_id', 'create_ses', # 'destroy_ses', 'free_stateid', 'getdirdeleg', 'getdevinfo', 'getdevlist', # 'layoutcommit', 'layoutget', 'layoutreturn', 'secinfononam', 'sequence', # 'set_ssv', 'test_stateid', 'want_deleg', 'destroy_clid', 'reclaim_comp') # Just catin' the tuples together to make the full list. #self.vars = self.vars4 + self.vars41 # these are terrible shortnames for every possible variable #self.nick4 = ('unsd','unsd','unsd','accs','clse','comm','crt','delp','delr','gatr','gfh', # 'link','lock','lckt','lcku','lkup','lkpr','nver','open','opna','opnc','opnd', # 'pfh','ppfh','prfh','read','rdir','rlnk','rmv','ren','rnw','rfh','sfh','snfo', # 'satr','scid','scic','ver','wrt','rlko') #self.nick41 = ('bctl','bcon','eid','cses','dses','fsid', # 'gdd','gdi','gdl','lcmt','lget','lrtn','sinn','seq','sets','tsts','wdel','dcid', # 'rcmp') #self.nick = self.nick4 + self.nick41 self.type = 'd' self.width = 5 self.scale = 1000 self.open("/proc/net/rpc/nfsd") def check(self): # other NFS modules had this, so I left it. It seems to work. info(1, 'Module %s is still experimental.' 
% self.filename) def extract(self): # list of fields from /proc/net/rpc/nfsd, in order of output # taken from include/linux/nfs4.h in kernel source nfsd4_names = ('label', 'fieldcount', 'op0-unused', 'op1-unused', 'op2-future' , 'access', 'close', 'commit', 'create', 'delegpurge', 'delegreturn', 'getattr', 'getfh', 'link', 'lock', 'lockt', 'locku', 'lookup', 'lookup_root', 'nverify', 'open', 'openattr', 'open_conf', 'open_dgrd','putfh', 'putpubfh', 'putrootfh', 'read', 'readdir', 'readlink', 'remove', 'rename','renew', 'restorefh', 'savefh', 'secinfo', 'setattr', 'setcltid', 'setcltidconf', 'verify', 'write', 'rellockowner', 'bc_ctl', 'bind_conn', 'exchange_id', 'create_ses', 'destroy_ses', 'free_stateid', 'getdirdeleg', 'getdevinfo', 'getdevlist', 'layoutcommit', 'layoutget', 'layoutreturn', 'secinfononam', 'sequence', 'set_ssv', 'test_stateid', 'want_deleg', 'destroy_clid', 'reclaim_comp' ) for line in self.splitlines(): fields = line.split() if fields[0] == "proc4ops": # just grab NFSv4 stats assert int(fields[1]) == len(fields[2:]), ("reported field count (%d) does not match actual field count (%d)" % (int(fields[1]), len(fields[2:]))) for var in self.vars: self.set2[var] = fields[nfsd4_names.index(var)] for name in self.vars: self.val[name] = (int(self.set2[name]) - int(self.set1[name])) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_nfsstat4.py000066400000000000000000000056321351755116500176600ustar00rootroot00000000000000### Author: Adam Michel ### Based on work by: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'nfs4 client' # this vars/nick pair is the ones I considered relevant. Any set of the full list would work. 
self.vars = ('read', 'write', 'readdir', 'commit', 'getattr', 'create', 'link','remove') self.nick = ('read', 'writ', 'rdir', 'cmmt', 'gatr','crt','link','rmv') # this is every possible variable if you're into that #self.vars = ("read", "write", "commit", "open", "open_conf", "open_noat", "open_dgrd", "close", # "setattr", "fsinfo", "renew", "setclntid", "confirm", "lock", "lockt", "locku", # "access", "getattr", "lookup", "lookup_root", "remove", "rename", "link", "symlink", # "create", "pathconf", "statfs", "readlink", "readdir", "server_caps", "delegreturn", # "getacl", "setacl", "fs_locations", "rel_lkowner", "secinfo") # these are terrible shortnames for every possible variable #self.nick = ("read", "writ", "comt", "open", "opnc", "opnn", "opnd", "clse", "seta", "fnfo", # "renw", "stcd", "cnfm", "lock", "lckt", "lcku", "accs", "gatr", "lkup", "lkp_r", # "rem", "ren", "lnk", "slnk", "crte", "pthc", "stfs", "rdlk", "rdir", "scps", "delr", # "gacl", "sacl", "fslo", "relo", "seco") self.type = 'd' self.width = 5 self.scale = 1000 self.open('/proc/net/rpc/nfs') def check(self): # other NFS modules had this, so I left it. It seems to work. info(1, 'Module %s is still experimental.' 
% self.filename) def extract(self): # list of fields from nfsstat, in order of output from cat /proc/net/rpc/nfs nfs4_names = ("version", "fieldcount", "null", "read", "write", "commit", "open", "open_conf", "open_noat", "open_dgrd", "close", "setattr", "fsinfo", "renew", "setclntid", "confirm", "lock", "lockt", "locku", "access", "getattr", "lookup", "lookup_root", "remove", "rename", "link", "symlink", "create", "pathconf", "statfs", "readlink", "readdir", "server_caps", "delegreturn", "getacl", "setacl", "fs_locations", "rel_lkowner", "secinfo") for line in self.splitlines(): fields = line.split() if fields[0] == "proc4": # just grab NFSv4 stats assert int(fields[1]) == len(fields[2:]), ("reported field count (%d) does not match actual field count (%d)" % (int(fields[1]), len(fields[2:]))) for var in self.vars: self.set2[var] = fields[nfs4_names.index(var)] for name in self.vars: self.val[name] = (int(self.set2[name]) - int(self.set1[name])) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_ntp.py000066400000000000000000000036341351755116500167130ustar00rootroot00000000000000### Author: Dag Wieers global socket import socket global struct import struct ### FIXME: Implement millisecond granularity as well ### FIXME: Interrupts socket if data is overdue (more than 250ms ?) class dstat_plugin(dstat): """ Time from an NTP server. BEWARE: this dstat plugin typically takes a lot longer to run than system plugins and for that reason it is important to use an NTP server located nearby as well as make sure that it does not impact your other counters too much. 
""" def __init__(self): self.name = 'ntp' self.nick = ('date/time',) self.vars = ('time',) self.timefmt = os.getenv('DSTAT_TIMEFMT') or '%d-%m %H:%M:%S' self.ntpserver = os.getenv('DSTAT_NTPSERVER') or '0.fedora.pool.ntp.org' self.type = 's' self.width = len(time.strftime(self.timefmt, time.localtime())) self.scale = 0 self.epoch = 2208988800 # socket.setdefaulttimeout(0.25) self.socket = socket.socket( socket.AF_INET, socket.SOCK_DGRAM ) self.socket.settimeout(0.25) def gettime(self): self.socket.sendto( '\x1b' + 47 * '\0', ( self.ntpserver, 123 )) data, address = self.socket.recvfrom(1024) return struct.unpack( '!12I', data )[10] - self.epoch def check(self): try: self.gettime() except socket.gaierror: raise Exception('Failed to connect to NTP server %s.' % self.ntpserver) except socket.error: raise Exception('Error connecting to NTP server %s.' % self.ntpserver) def extract(self): try: self.val['time'] = time.strftime(self.timefmt, time.localtime(self.gettime())) except: self.val['time'] = theme['error'] + '-'.rjust(self.width-1) + ' ' def showcsv(self): return time.strftime(self.timefmt, time.localtime(self.gettime())) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_postfix.py000077500000000000000000000011651351755116500176060ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'postfix' self.nick = ('inco', 'actv', 'dfrd', 'bnce', 'defr') self.vars = ('incoming', 'active', 'deferred', 'bounce', 'defer') self.type = 'd' self.width = 4 self.scale = 100 def check(self): if not os.access('/var/spool/postfix/active', os.R_OK): raise Exception('Cannot access postfix queues') def extract(self): for item in self.vars: self.val[item] = len(glob.glob('/var/spool/postfix/'+item+'/*/*')) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_power.py000066400000000000000000000042301351755116500172370ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Power usage information from ACPI. 
Displays the power usage, in watts, of your system's battery using ACPI information. This information is only available when the battery is being used (or being charged). """ def __init__(self): self.name = 'power' self.nick = ( 'usage', ) self.vars = ( 'rate', ) self.type = 'f' self.width = 5 self.scale = 1 self.rate = 0 self.batteries = [] for battery in os.listdir('/proc/acpi/battery/'): for line in dopen('/proc/acpi/battery/'+battery+'/state').readlines(): l = line.split() if len(l) < 2: continue self.batteries.append(battery) break def check(self): if not self.batteries: raise Exception('No battery information found, no power usage statistics') def extract(self): amperes_drawn = 0 voltage = 0 watts_drawn = 0 for battery in self.batteries: for line in dopen('/proc/acpi/battery/'+battery+'/state').readlines(): l = line.split() if len(l) < 3: continue if l[0] == 'present:' and l[1] != 'yes': continue if l[0:2] == ['charging','state:'] and l[2] != 'discharging': voltage = 0 break if l[0:2] == ['present','voltage:']: voltage = int(l[2]) / 1000.0 elif l[0:2] == ['present','rate:'] and l[3] == 'mW': watts_drawn = int(l[2]) / 1000.0 elif l[0:2] == ['present','rate:'] and l[3] == 'mA': amperes_drawn = int(l[2]) / 1000.0 self.rate = self.rate + watts_drawn + voltage * amperes_drawn ### Return error if we found no information if self.rate == 0: self.rate = -1 if op.update: self.val['rate'] = self.rate / elapsed else: self.val['rate'] = self.rate if step == op.delay: self.rate = 0 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_proc_count.py000077500000000000000000000005651351755116500202660ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Total number of processes on this system. 
""" def __init__(self): self.name = 'procs' self.vars = ('total',) self.type = 'd' self.width = 4 self.scale = 10 def extract(self): self.val['total'] = len([pid for pid in proc_pidlist()]) dstat-0.7.4/plugins/dstat_qmail.py000066400000000000000000000012201351755116500172020ustar00rootroot00000000000000### Author: Tom Van Looy class dstat_plugin(dstat): """ port of qmail_qstat to dstat """ def __init__(self): self.name = 'qmail' self.nick = ('in_queue', 'not_prep') self.vars = ('mess', 'todo') self.type = 'd' self.width = 4 self.scale = 100 def check(self): for item in self.vars: if not os.access('/var/qmail/queue/'+item, os.R_OK): raise Exception('Cannot access qmail queues') def extract(self): for item in self.vars: self.val[item] = len(glob.glob('/var/qmail/queue/'+item+'/*/*')) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_redis.py000066400000000000000000000025521351755116500172160ustar00rootroot00000000000000### Author: Jihyun Yu global redis_host redis_host = os.getenv('DSTAT_REDIS_HOST') or "127.0.0.1" global redis_port redis_port = os.getenv('DSTAT_REDIS_PORT') or "6379" class dstat_plugin(dstat): def __init__(self): self.type = 'd' self.width = 7 self.scale = 10000 self.name = 'redis' self.nick = ('tps',) self.vars = ('tps',) self.cmdInfo = '*1\r\n$4\r\ninfo\r\n' def get_info(self): global socket import socket global redis_host global redis_port s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) try: s.settimeout(0.1) s.connect((redis_host, int(redis_port))) s.send(self.cmdInfo) dict = {}; for line in s.recv(1024*1024).split('\r\n'): if line == "" or line[0] == '#' or line[0] == '*' or line[0] == '$': continue pair = line.split(':', 2) dict[pair[0]] = pair[1] return dict except: return {} finally: try: s.close() except: pass def extract(self): key = "instantaneous_ops_per_sec" dic = self.get_info() if key in dic: self.val['tps'] = int(dic[key]) # vim:ts=4:sw=4:et 
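### The redis plugin above speaks raw RESP over a socket and folds the INFO
### reply into a dict one line at a time. A minimal standalone sketch of that
### parsing step follows; parse_redis_info is a hypothetical helper name (the
### plugin does this inline in get_info()), and it simplifies the plugin's
### split(':', 2) to a split on the first ':' only.

```python
def parse_redis_info(payload):
    """Fold a Redis INFO reply into a dict, mirroring dstat_redis.get_info():
    skip empty lines, comments ('#') and RESP framing lines ('*', '$'),
    then split the remaining lines on the first ':'."""
    info = {}
    for line in payload.split('\r\n'):
        if line == '' or line[0] in ('#', '*', '$'):
            continue
        key, sep, value = line.partition(':')
        if sep:
            info[key] = value
    return info

# A shortened INFO reply as it arrives over the wire
sample = '$88\r\n# Stats\r\ninstantaneous_ops_per_sec:42\r\ntotal_connections_received:7\r\n\r\n'
stats = parse_redis_info(sample)
```

### Section headers such as '# Stats' and the '$88' bulk-length prefix are
### dropped, leaving only the key:value pairs the plugin indexes.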
dstat-0.7.4/plugins/dstat_rpc.py000066400000000000000000000013561351755116500166750ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'rpc client' self.nick = ('call', 'retr', 'refr') self.vars = ('calls', 'retransmits', 'autorefreshes') self.type = 'd' self.width = 5 self.scale = 1000 self.open('/proc/net/rpc/nfs') def extract(self): for l in self.splitlines(): if not l or l[0] != 'rpc': continue for i, name in enumerate(self.vars): self.set2[name] = int(l[i+1]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_rpcd.py000066400000000000000000000014141351755116500170340ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'rpc server' self.nick = ('call', 'erca', 'erau', 'ercl', 'xdrc') self.vars = ('calls', 'badcalls', 'badauth', 'badclnt', 'xdrcall') self.type = 'd' self.width = 5 self.scale = 1000 self.open('/proc/net/rpc/nfsd') def extract(self): for l in self.splitlines(): if not l or l[0] != 'rpc': continue for i, name in enumerate(self.vars): self.set2[name] = int(l[i+1]) for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_sendmail.py000077500000000000000000000010601351755116500177000ustar00rootroot00000000000000### Author: Dag Wieers ### FIXME: Should read /var/log/mail/statistics or /etc/mail/statistics (format ?) 
class dstat_plugin(dstat): def __init__(self): self.name = 'sendmail' self.vars = ('queue',) self.type = 'd' self.width = 4 self.scale = 100 def check(self): if not os.access('/var/spool/mqueue', os.R_OK): raise Exception('Cannot access sendmail queue') def extract(self): self.val['queue'] = len(glob.glob('/var/spool/mqueue/qf*')) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_snmp_cpu.py000066400000000000000000000031431351755116500177310ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'total cpu' self.vars = ( 'usr', 'sys', 'idl' ) self.type = 'p' self.width = 3 self.scale = 34 self.server = os.getenv('DSTAT_SNMPSERVER') or '192.168.1.1' self.community = os.getenv('DSTAT_SNMPCOMMUNITY') or 'public' def check(self): try: global cmdgen from pysnmp.entity.rfc3413.oneliner import cmdgen except: raise Exception('Needs pysnmp and pyasn1 modules') def extract(self): self.set2['usr'] = int(snmpget(self.server, self.community, (1,3,6,1,4,1,2021,11,50,0))) self.set2['sys'] = int(snmpget(self.server, self.community, (1,3,6,1,4,1,2021,11,52,0))) self.set2['idl'] = int(snmpget(self.server, self.community, (1,3,6,1,4,1,2021,11,53,0))) # self.set2['usr'] = int(snmpget(self.server, self.community, (('UCD-SNMP-MIB', 'ssCpuRawUser'), 0))) # self.set2['sys'] = int(snmpget(self.server, self.community, (('UCD-SNMP-MIB', 'ssCpuRawSystem'), 0))) # self.set2['idl'] = int(snmpget(self.server, self.community, (('UCD-SNMP-MIB', 'ssCpuRawIdle'), 0))) if update: for name in self.vars: if sum(self.set2.values()) > sum(self.set1.values()): self.val[name] = 100.0 * (self.set2[name] - self.set1[name]) / (sum(self.set2.values()) - sum(self.set1.values())) else: self.val[name] = 0 if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_snmp_load.py000066400000000000000000000014511351755116500200610ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): 
self.name = 'load avg' self.nick = ('1m', '5m', '15m') self.vars = ('load1', 'load5', 'load15') self.type = 'f' self.width = 4 self.scale = 0.5 self.server = os.getenv('DSTAT_SNMPSERVER') or '192.168.1.1' self.community = os.getenv('DSTAT_SNMPCOMMUNITY') or 'public' def check(self): try: global cmdgen from pysnmp.entity.rfc3413.oneliner import cmdgen except: raise Exception('Needs pysnmp and pyasn1 modules') def extract(self): list(map(lambda x, y: self.val.update({x: float(y)}), self.vars, snmpwalk(self.server, self.community, (1,3,6,1,4,1,2021,10,1,3)))) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_snmp_mem.py000066400000000000000000000023201351755116500177140ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'memory usage' self.nick = ('used', 'buff', 'cach', 'free') self.vars = ('MemUsed', 'Buffers', 'Cached', 'MemFree') self.server = os.getenv('DSTAT_SNMPSERVER') or '192.168.1.1' self.community = os.getenv('DSTAT_SNMPCOMMUNITY') or 'public' def check(self): try: global cmdgen from pysnmp.entity.rfc3413.oneliner import cmdgen except: raise Exception('Needs pysnmp and pyasn1 modules') def extract(self): self.val['MemTotal'] = int(snmpget(self.server, self.community, (1,3,6,1,4,1,2021,4,5,0))) * 1024 self.val['MemFree'] = int(snmpget(self.server, self.community, (1,3,6,1,4,1,2021,4,11,0))) * 1024 # self.val['Shared'] = int(snmpget(self.server, self.community, (1,3,6,1,4,1,2021,4,13,0))) * 1024 self.val['Buffers'] = int(snmpget(self.server, self.community, (1,3,6,1,4,1,2021,4,14,0))) * 1024 self.val['Cached'] = int(snmpget(self.server, self.community, (1,3,6,1,4,1,2021,4,15,0))) * 1024 self.val['MemUsed'] = self.val['MemTotal'] - self.val['MemFree'] # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_snmp_net.py000066400000000000000000000022271351755116500177320ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.nick = ('recv', 'send') self.type = 'b' self.cols 
= 2 self.server = os.getenv('DSTAT_SNMPSERVER') or '192.168.1.1' self.community = os.getenv('DSTAT_SNMPCOMMUNITY') or 'public' def check(self): try: global cmdgen from pysnmp.entity.rfc3413.oneliner import cmdgen except: raise Exception('Needs pysnmp and pyasn1 modules') def name(self): return self.vars def vars(self): return [ str(x) for x in snmpwalk(self.server, self.community, (1,3,6,1,2,1,2,2,1,2)) ] def extract(self): list(map(lambda x, y, z: self.set2.update({x: (int(y), int(z))}), self.vars, snmpwalk(self.server, self.community, (1,3,6,1,2,1,2,2,1,10)), snmpwalk(self.server, self.community, (1,3,6,1,2,1,2,2,1,16)))) if update: for name in self.set2: self.val[name] = list(map(lambda x, y: (y - x) * 1.0 / elapsed, self.set1[name], self.set2[name])) if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_snmp_net_err.py000066400000000000000000000022641351755116500206030ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.nick = ('error', ) self.type = 'b' self.cols = 1 self.server = os.getenv('DSTAT_SNMPSERVER') or '192.168.1.1' self.community = os.getenv('DSTAT_SNMPCOMMUNITY') or 'public' def check(self): try: global cmdgen from pysnmp.entity.rfc3413.oneliner import cmdgen except: raise Exception('Needs pysnmp and pyasn1 modules') def name(self): return self.vars def vars(self): return [ str(x) for x in snmpwalk(self.server, self.community, (1,3,6,1,2,1,2,2,1,2)) ] def extract(self): list(map(lambda x, y: self.set2.update({x: (int(y), )}), self.vars, snmpwalk(self.server, self.community, (1,3,6,1,2,1,2,2,1,20)))) if update: for name in self.set2: # self.val[name] = list(map(lambda x, y: (y - x) * 1.0 / elapsed, self.set1[name], self.set2[name])) self.val[name] = list(map(lambda x, y: (y - x) * 1.0, self.set1[name], self.set2[name])) if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et 
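### The snmp_net and snmp_net_err plugins share one delta step: two snapshots
### of per-interface counters are mapped to per-second rates with
### (y - x) * 1.0 / elapsed. A standalone sketch of that computation follows;
### iface_rates is a made-up name, since the plugins apply the same map inline
### over self.set1/self.set2 in extract().

```python
def iface_rates(set1, set2, elapsed):
    """Turn two snapshots of per-interface counter tuples (e.g. ifInOctets,
    ifOutOctets) into per-second rates, as the snmp_net extract() does."""
    rates = {}
    for name, new in set2.items():
        # A missing interface in the older snapshot counts from zero,
        # matching how dstat seeds set1 on the first sample.
        old = set1.get(name, tuple(0 for _ in new))
        rates[name] = [(y - x) * 1.0 / elapsed for x, y in zip(old, new)]
    return rates

before = {'eth0': (1000, 2000), 'eth1': (500, 500)}
after = {'eth0': (1500, 2600), 'eth1': (500, 750)}
rates = iface_rates(before, after, 5)   # eth0 -> [100.0, 120.0]
```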
dstat-0.7.4/plugins/dstat_snmp_sys.py000066400000000000000000000020411351755116500177540ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'system' self.nick = ('int', 'csw') self.vars = ('intr', 'ctxt') self.type = 'd' self.width = 5 self.scale = 1000 self.server = os.getenv('DSTAT_SNMPSERVER') or '192.168.1.1' self.community = os.getenv('DSTAT_SNMPCOMMUNITY') or 'public' def check(self): try: global cmdgen from pysnmp.entity.rfc3413.oneliner import cmdgen except: raise Exception('Needs pysnmp and pyasn1 modules') def extract(self): self.set2['intr'] = int(snmpget(self.server, self.community, (1,3,6,1,4,1,2021,11,59,0))) self.set2['ctxt'] = int(snmpget(self.server, self.community, (1,3,6,1,4,1,2021,11,60,0))) if update: for name in self.vars: self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed if step == op.delay: self.set1.update(self.set2) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_snooze.py000066400000000000000000000016141351755116500174230ustar00rootroot00000000000000class dstat_plugin(dstat): def __init__(self): self.name = 'snooze' self.vars = ('snooze',) self.type = 's' self.width = 6 self.scale = 0 self.before = time.time() def extract(self): now = time.time() if loop != 0: self.val['snooze'] = now - self.before else: self.val['snooze'] = self.before if step == op.delay: self.before = now def show(self): if self.val['snooze'] > step + 1: return ansi['default'] + ' -' if op.blackonwhite: textcolor = 'black' if step != op.delay: textcolor = 'darkgray' else: textcolor = 'white' if step != op.delay: textcolor = 'gray' snoze, c = fchg(self.val['snooze'], 6, 1000) return color[textcolor] + snoze dstat-0.7.4/plugins/dstat_squid.py000066400000000000000000000032021351755116500172260ustar00rootroot00000000000000### Authority: Jason Friedland # This plugin has been tested with: # - Dstat 0.6.7 # - CentOS release 5.4 (Final) # - Python 2.4.3 # - Squid 2.6 and 2.7 global squidclient_options 
squidclient_options = os.getenv('DSTAT_SQUID_OPTS') # -p 8080 class dstat_plugin(dstat): ''' Provides various Squid statistics. ''' def __init__(self): self.name = 'squid status' self.type = 's' self.width = 5 self.scale = 1000 self.vars = ('Number of file desc currently in use', 'CPU Usage, 5 minute avg', 'Total accounted', 'Number of clients accessing cache', 'Mean Object Size') self.nick = ('fdesc', 'cpu5', 'mem', 'clnts', 'objsz') def check(self): if not os.access('/usr/sbin/squidclient', os.X_OK): raise Exception('Needs squidclient binary') cmd_test('/usr/sbin/squidclient %s mgr:info' % squidclient_options) return True def extract(self): try: for l in cmd_splitlines('/usr/sbin/squidclient %s mgr:info' % squidclient_options, ':'): if l[0].strip() in self.vars: self.val[l[0].strip()] = l[1].strip() break except IOError as e: if op.debug > 1: print('%s: lost pipe to squidclient, %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 except Exception as e: if op.debug > 1: print('%s: exception %s' % (self.filename, e)) for name in self.vars: self.val[name] = -1 # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_test.py000066400000000000000000000010431351755116500170610ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): ''' Provides a test playground to test syntax and structure. 
''' def __init__(self): self.name = 'test' self.nick = ( 'f1', 'f2' ) self.vars = ( 'f1', 'f2' ) # self.type = 'd' # self.width = 4 # self.scale = 20 self.type = 's' self.width = 4 self.scale = 0 def extract(self): # Self.val = { 'f1': -1, 'f2': -1 } self.val = { 'f1': 'test', 'f2': 'test' } # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_thermal.py000066400000000000000000000066711351755116500175520ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): def __init__(self): self.name = 'thermal' self.type = 'd' self.width = 3 self.scale = 20 if os.path.exists('/sys/devices/virtual/thermal/'): self.nick = [] self.vars = [] for zone in os.listdir('/sys/devices/virtual/thermal/'): zone_split=zone.split("thermal_zone") if len(zone_split) == 2: self.vars.append(zone) name="".join(["tz",zone_split[1]]) self.nick.append(name) elif os.path.exists('/sys/bus/acpi/devices/LNXTHERM:01/thermal_zone/'): self.vars = os.listdir('/sys/bus/acpi/devices/LNXTHERM:01/thermal_zone/') self.nick = [] for name in self.vars: self.nick.append(name.lower()) elif os.path.exists('/proc/acpi/ibm/thermal'): self.namelist = ['cpu', 'pci', 'hdd', 'cpu', 'ba0', 'unk', 'ba1', 'unk'] self.nick = [] for line in dopen('/proc/acpi/ibm/thermal'): l = line.split() for i, name in enumerate(self.namelist): if int(l[i+1]) > 0: self.nick.append(name) self.vars = self.nick elif os.path.exists('/proc/acpi/thermal_zone/'): self.vars = os.listdir('/proc/acpi/thermal_zone/') # self.nick = [name.lower() for name in self.vars] self.nick = [] for name in self.vars: self.nick.append(name.lower()) else: raise Exception('Needs kernel thermal, ACPI or IBM-ACPI support') def check(self): if not os.path.exists('/proc/acpi/ibm/thermal') and \ not os.path.exists('/proc/acpi/thermal_zone/') and \ not os.path.exists('/sys/devices/virtual/thermal/') and \ not os.path.exists('/sys/bus/acpi/devices/LNXTHERM:00/thermal_zone/'): raise Exception('Needs kernel thermal, ACPI or IBM-ACPI support') def extract(self): 
if os.path.exists('/sys/devices/virtual/thermal/'): for zone in self.vars: for line in dopen('/sys/devices/virtual/thermal/'+zone+'/temp').readlines(): l = line.split() self.val[zone] = int(l[0]) elif os.path.exists('/sys/bus/acpi/devices/LNXTHERM:01/thermal_zone/'): for zone in self.vars: if os.path.isdir('/sys/bus/acpi/devices/LNXTHERM:01/thermal_zone/'+zone) == False: for line in dopen('/sys/bus/acpi/devices/LNXTHERM:01/thermal_zone/'+zone).readlines(): l = line.split() if l[0].isdigit() == True: self.val[zone] = int(l[0]) else: self.val[zone] = 0 elif os.path.exists('/proc/acpi/ibm/thermal'): for line in dopen('/proc/acpi/ibm/thermal'): l = line.split() for i, name in enumerate(self.namelist): if int(l[i+1]) > 0: self.val[name] = int(l[i+1]) elif os.path.exists('/proc/acpi/thermal_zone/'): for zone in self.vars: for line in dopen('/proc/acpi/thermal_zone/'+zone+'/temperature').readlines(): l = line.split() self.val[zone] = int(l[1]) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_top_bio.py000066400000000000000000000053261351755116500175450ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Top most expensive block I/O process. Displays the name of the most expensive block I/O process. 
    """
    def __init__(self):
        self.name = 'most expensive'
        self.vars = ('block i/o process',)
        self.type = 's'
        self.width = 22
        self.scale = 0
        self.pidset1 = {}

    def check(self):
        if not os.access('/proc/self/io', os.R_OK):
            raise Exception('Kernel has no per-process I/O accounting [CONFIG_TASK_IO_ACCOUNTING], use at least 2.6.20')

    def extract(self):
        self.output = ''
        self.pidset2 = {}
        self.val['usage'] = 0.0
        for pid in proc_pidlist():
            try:
                ### Reset values
                if pid not in self.pidset2:
                    self.pidset2[pid] = {'read_bytes:': 0, 'write_bytes:': 0}
                if pid not in self.pidset1:
                    self.pidset1[pid] = {'read_bytes:': 0, 'write_bytes:': 0}

                ### Extract name
                name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1]

                ### Extract counters
                for l in proc_splitlines('/proc/%s/io' % pid):
                    if len(l) != 2: continue
                    self.pidset2[pid][l[0]] = int(l[1])
            except IOError:
                continue
            except IndexError:
                continue

            read_usage = (self.pidset2[pid]['read_bytes:'] - self.pidset1[pid]['read_bytes:']) * 1.0 / elapsed
            write_usage = (self.pidset2[pid]['write_bytes:'] - self.pidset1[pid]['write_bytes:']) * 1.0 / elapsed
            usage = read_usage + write_usage

            ### Get the process that spends the most jiffies
            if usage > self.val['usage']:
                self.val['usage'] = usage
                self.val['read_usage'] = read_usage
                self.val['write_usage'] = write_usage
                self.val['pid'] = pid
                self.val['name'] = getnamebypid(pid, name)
#                st = os.stat("/proc/%s" % pid)

        if step == op.delay:
            self.pidset1 = self.pidset2

        if self.val['usage'] != 0.0:
            self.output = '%-*s%s %s' % (self.width-11, self.val['name'][0:self.width-11], cprint(self.val['read_usage'], 'd', 5, 1024), cprint(self.val['write_usage'], 'd', 5, 1024))

        ### Debug (show PID)
#        self.output = '%*s %-*s%s %s' % (5, self.val['pid'], self.width-17, self.val['name'][0:self.width-17], cprint(self.val['read_usage'], 'd', 5, 1024), cprint(self.val['write_usage'], 'd', 5, 1024))

    def showcsv(self):
        return '%s / %d:%d' % (self.val['name'], self.val['read_usage'], self.val['write_usage'])

# vim:ts=4:sw=4:et
dstat-0.7.4/plugins/dstat_top_bio_adv.py000066400000000000000000000063431351755116500203770ustar00rootroot00000000000000### Dstat all I/O process plugin ### Displays all processes' I/O read/write stats and CPU usage ### ### Authority: Guillermo Cantu Luna class dstat_plugin(dstat): def __init__(self): self.name = 'most expensive block i/o process' self.vars = ('process pid read write cpu',) self.type = 's' self.width = 40 self.scale = 0 self.pidset1 = {} def check(self): if not os.access('/proc/self/io', os.R_OK): raise Exception('Kernel has no per-process I/O accounting [CONFIG_TASK_IO_ACCOUNTING], use at least 2.6.20') return True def extract(self): self.output = '' self.pidset2 = {} self.val['usage'] = 0.0 for pid in proc_pidlist(): try: ### Reset values if pid not in self.pidset2: self.pidset2[pid] = {'read_bytes:': 0, 'write_bytes:': 0, 'cputime:': 0, 'cpuper:': 0} if pid not in self.pidset1: self.pidset1[pid] = {'read_bytes:': 0, 'write_bytes:': 0, 'cputime:': 0, 'cpuper:': 0} ### Extract name name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1] ### Extract counters for l in proc_splitlines('/proc/%s/io' % pid): if len(l) != 2: continue self.pidset2[pid][l[0]] = int(l[1]) ### Get CPU usage l = proc_splitline('/proc/%s/stat' % pid) if len(l) < 15: cpu_usage = 0 else: self.pidset2[pid]['cputime:'] = int(l[13]) + int(l[14]) cpu_usage = (self.pidset2[pid]['cputime:'] - self.pidset1[pid]['cputime:']) * 1.0 / elapsed / cpunr except ValueError: continue except IOError: continue except IndexError: continue read_usage = (self.pidset2[pid]['read_bytes:'] - self.pidset1[pid]['read_bytes:']) * 1.0 / elapsed write_usage = (self.pidset2[pid]['write_bytes:'] - self.pidset1[pid]['write_bytes:']) * 1.0 / elapsed usage = read_usage + write_usage ### Get the process that spends the most jiffies if usage > self.val['usage']: self.val['usage'] = usage self.val['read_usage'] = read_usage self.val['write_usage'] = write_usage self.val['pid'] = pid self.val['name'] = 
getnamebypid(pid, name) self.val['cpu_usage'] = cpu_usage if step == op.delay: self.pidset1 = self.pidset2 if self.val['usage'] != 0.0: self.output = '%-*s%s%-5s%s%s%s%s%%' % (self.width-14-len(pid), self.val['name'][0:self.width-14-len(pid)], color['darkblue'], self.val['pid'], cprint(self.val['read_usage'], 'd', 5, 1024), cprint(self.val['write_usage'], 'd', 5, 1024), cprint(self.val['cpu_usage'], 'f', 3, 34), color['darkgray']) def showcsv(self): return 'Top: %s\t%s\t%s\t%s' % (self.val['name'][0:self.width-20], self.val['read_usage'], self.val['write_usage'], self.val['cpu_usage']) dstat-0.7.4/plugins/dstat_top_childwait.py000066400000000000000000000032641351755116500207430ustar00rootroot00000000000000### Dstat most expensive process plugin ### Displays the name of the most expensive process ### ### Authority: dag@wieers.com global cpunr class dstat_plugin(dstat): def __init__(self): self.name = 'most waiting for' self.vars = ('child process',) self.type = 's' self.width = 16 self.scale = 0 def extract(self): self.set2 = {} self.val['max'] = 0.0 for pid in proc_pidlist(): try: ### Using dopen() will cause too many open files l = proc_splitline('/proc/%s/stat' % pid) except IOError: continue if len(l) < 15: continue ### Reset previous value if it doesn't exist if pid not in self.set1: self.set1[pid] = 0 self.set2[pid] = int(l[15]) + int(l[16]) usage = (self.set2[pid] - self.set1[pid]) * 1.0 / elapsed / cpunr ### Is it a new topper ? 
            if usage <= self.val['max']: continue

            self.val['max'] = usage
            self.val['name'] = getnamebypid(pid, l[1][1:-1])
            self.val['pid'] = pid

        ### Debug (show PID)
#        self.val['process'] = '%*s %-*s' % (5, self.val['pid'], self.width-6, self.val['name'])

        if step == op.delay:
            self.set1 = self.set2

    def show(self):
        if self.val['max'] == 0.0:
            return '%-*s' % (self.width, '')
        else:
            return '%s%-*s%s' % (theme['default'], self.width-3, self.val['name'][0:self.width-3], cprint(self.val['max'], 'p', 3, 34))

    def showcsv(self):
        return '%s / %d%%' % (self.val['name'], self.val['max'])

# vim:ts=4:sw=4:et
dstat-0.7.4/plugins/dstat_top_cpu.py
### Author: Dag Wieers

class dstat_plugin(dstat):
    """
    Most expensive CPU process.

    Displays the process that uses the CPU the most during the monitored
    interval. The value displayed is the percentage of CPU time for the total
    amount of CPU processing power. Based on per process CPU information.
    """
    def __init__(self):
        self.name = 'most expensive'
        self.vars = ('cpu process',)
        self.type = 's'
        self.width = 16
        self.scale = 0
        self.pidset1 = {}

    def extract(self):
        self.output = ''
        self.pidset2 = {}
        self.val['max'] = 0.0
        for pid in proc_pidlist():
            try:
                ### Using dopen() will cause too many open files
                l = proc_splitline('/proc/%s/stat' % pid)
            except IOError:
                continue

            if len(l) < 15: continue

            ### Reset previous value if it doesn't exist
            if pid not in self.pidset1:
                self.pidset1[pid] = 0

            self.pidset2[pid] = int(l[13]) + int(l[14])
            usage = (self.pidset2[pid] - self.pidset1[pid]) * 1.0 / elapsed / cpunr

            ### Is it a new topper ?
if usage < self.val['max']: continue name = l[1][1:-1] self.val['max'] = usage self.val['pid'] = pid self.val['name'] = getnamebypid(pid, name) # self.val['name'] = name if self.val['max'] != 0.0: self.output = '%-*s%s' % (self.width-3, self.val['name'][0:self.width-3], cprint(self.val['max'], 'f', 3, 34)) ### Debug (show PID) # self.output = '%*s %-*s' % (5, self.val['pid'], self.width-6, self.val['name']) if step == op.delay: self.pidset1 = self.pidset2 def showcsv(self): return '%s / %d%%' % (self.val['name'], self.val['max']) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_top_cpu_adv.py000066400000000000000000000061511351755116500204120ustar00rootroot00000000000000### Dstat all I/O process plugin ### Displays all processes' I/O read/write stats and CPU usage ### ### Authority: Guillermo Cantu Luna class dstat_plugin(dstat): def __init__(self): self.name = 'most expensive cpu process' self.vars = ('process pid cpu read write',) self.type = 's' self.width = 40 self.scale = 0 self.pidset1 = {} def check(self): if not os.access('/proc/self/io', os.R_OK): raise Exception('Kernel has no per-process I/O accounting [CONFIG_TASK_IO_ACCOUNTING], use at least 2.6.20') return True def extract(self): self.output = '' self.pidset2 = {} self.val['cpu_usage'] = 0 for pid in proc_pidlist(): try: ### Reset values if pid not in self.pidset2: self.pidset2[pid] = {'rchar:': 0, 'wchar:': 0, 'cputime:': 0, 'cpuper:': 0} if pid not in self.pidset1: self.pidset1[pid] = {'rchar:': 0, 'wchar:': 0, 'cputime:': 0, 'cpuper:': 0} ### Extract name name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1] ### Extract counters for l in proc_splitlines('/proc/%s/io' % pid): if len(l) != 2: continue self.pidset2[pid][l[0]] = int(l[1]) ### Get CPU usage l = proc_splitline('/proc/%s/stat' % pid) if len(l) < 15: cpu_usage = 0.0 else: self.pidset2[pid]['cputime:'] = int(l[13]) + int(l[14]) cpu_usage = (self.pidset2[pid]['cputime:'] - self.pidset1[pid]['cputime:']) * 1.0 / elapsed / cpunr except ValueError: 
continue except IOError: continue except IndexError: continue read_usage = (self.pidset2[pid]['rchar:'] - self.pidset1[pid]['rchar:']) * 1.0 / elapsed write_usage = (self.pidset2[pid]['wchar:'] - self.pidset1[pid]['wchar:']) * 1.0 / elapsed ### Get the process that spends the most jiffies if cpu_usage > self.val['cpu_usage']: self.val['read_usage'] = read_usage self.val['write_usage'] = write_usage self.val['pid'] = pid self.val['name'] = getnamebypid(pid, name) self.val['cpu_usage'] = cpu_usage if step == op.delay: self.pidset1 = self.pidset2 if self.val['cpu_usage'] != 0.0: self.output = '%-*s%s%-5s%s%s%%%s%s' % (self.width-14-len(pid), self.val['name'][0:self.width-14-len(pid)], color['darkblue'], self.val['pid'], cprint(self.val['cpu_usage'], 'f', 3, 34), color['darkgray'],cprint(self.val['read_usage'], 'd', 5, 1024), cprint(self.val['write_usage'], 'd', 5, 1024)) def showcsv(self): return 'Top: %s\t%s\t%s\t%s' % (self.val['name'][0:self.width-20], self.val['cpu_usage'], self.val['read_usage'], self.val['write_usage']) dstat-0.7.4/plugins/dstat_top_cputime.py000066400000000000000000000044611351755116500204410ustar00rootroot00000000000000### Authority: dag@wieers.com ### For more information, see: ### http://eaglet.rain.com/rick/linux/schedstat/ class dstat_plugin(dstat): """ Name and total amount of CPU time consumed in milliseconds of the process that has the highest total amount of cputime for the measured timeframe. On a system with one CPU and one core, the total cputime is 1000ms. On a system with two cores the total cputime is 2000ms. 
""" def __init__(self): self.name = 'highest total' self.vars = ('cputime process',) self.type = 's' self.width = 17 self.scale = 0 self.pidset1 = {} def check(self): if not os.access('/proc/self/schedstat', os.R_OK): raise Exception('Kernel has no scheduler statistics [CONFIG_SCHEDSTATS], use at least 2.6.12') def extract(self): self.output = '' self.pidset2 = {} self.val['result'] = 0 for pid in proc_pidlist(): try: ### Reset values if pid not in self.pidset1: self.pidset1[pid] = {'run_ticks': 0} ### Extract name name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1] ### Extract counters l = proc_splitline('/proc/%s/schedstat' % pid) except IOError: continue except IndexError: continue if len(l) != 3: continue self.pidset2[pid] = {'run_ticks': int(l[0])} totrun = (self.pidset2[pid]['run_ticks'] - self.pidset1[pid]['run_ticks']) * 1.0 / elapsed ### Get the process that spends the most jiffies if totrun > self.val['result']: self.val['result'] = totrun self.val['pid'] = pid self.val['name'] = getnamebypid(pid, name) if step == op.delay: self.pidset1 = self.pidset2 if self.val['result'] != 0.0: self.output = '%-*s%s' % (self.width-4, self.val['name'][0:self.width-4], cprint(self.val['result'], 'd', 4, 100)) ### Debug (show PID) # self.output = '%*s %-*s' % (5, self.val['pid'], self.width-6, self.val['name']) def showcsv(self): return '%s / %.4f' % (self.val['name'], self.val['result']) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_top_cputime_avg.py000066400000000000000000000050411351755116500212710ustar00rootroot00000000000000### Authority: dag@wieers.com ### For more information, see: ### http://eaglet.rain.com/rick/linux/schedstat/ class dstat_plugin(dstat): """ Name and average amount of CPU time consumed in milliseconds of the process that has the highest average amount of cputime for the different slices for the measured timeframe. On a system with one CPU and one core, the total cputime is 1000ms. On a system with two cores the total cputime is 2000ms. 
""" def __init__(self): self.name = 'highest average' self.vars = ('cputime process',) self.type = 's' self.width = 17 self.scale = 0 self.pidset1 = {} def check(self): if not os.access('/proc/self/schedstat', os.R_OK): raise Exception('Kernel has no scheduler statistics [CONFIG_SCHEDSTATS], use at least 2.6.12') def extract(self): self.output = '' self.pidset2 = {} self.val['result'] = 0 for pid in proc_pidlist(): try: ### Reset values if pid not in self.pidset1: self.pidset1[pid] = {'run_ticks': 0, 'ran': 0} ### Extract name name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1] ### Extract counters l = proc_splitline('/proc/%s/schedstat' % pid) except IOError: continue except IndexError: continue if len(l) != 3: continue self.pidset2[pid] = {'run_ticks': int(l[0]), 'ran': int(l[2])} if self.pidset2[pid]['ran'] - self.pidset1[pid]['ran'] > 0: avgrun = (self.pidset2[pid]['run_ticks'] - self.pidset1[pid]['run_ticks']) * 1.0 / (self.pidset2[pid]['ran'] - self.pidset1[pid]['ran']) / elapsed else: avgrun = 0 ### Get the process that spends the most jiffies if avgrun > self.val['result']: self.val['result'] = avgrun self.val['pid'] = pid self.val['name'] = getnamebypid(pid, name) if step == op.delay: self.pidset1 = self.pidset2 if self.val['result'] != 0.0: self.output = '%-*s%s' % (self.width-4, self.val['name'][0:self.width-4], cprint(self.val['result'], 'f', 4, 100)) ### Debug (show PID) # self.output = '%*s %-*s' % (5, self.val['pid'], self.width-6, self.val['name']) def showcsv(self): return '%s / %.4f' % (self.val['name'], self.val['result']) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_top_int.py000066400000000000000000000036341351755116500175660ustar00rootroot00000000000000### Author: Dag Wieers class dstat_plugin(dstat): """ Top interrupt Displays the name of the most frequent interrupt """ def __init__(self): self.name = 'most frequent' self.vars = ('interrupt',) self.type = 's' self.width = 20 self.scale = 0 self.intset1 = [ ] self.open('/proc/stat') 
        self.names = self.names()

    def names(self):
        ret = {}
        for line in dopen('/proc/interrupts'):
            l = line.split()
            if len(l) <= cpunr: continue
            l1 = l[0].split(':')[0]
            ### Cleanup possible names from /proc/interrupts
            l2 = ' '.join(l[cpunr+3:])
            l2 = l2.replace('_hcd:', '/')
            l2 = re.sub('@pci[:\d+\.]+', '', l2)
            l2 = re.sub('ahci\[[:\da-z\.]+\]', 'ahci', l2)
            ret[l1] = l2
        return ret

    def extract(self):
        self.output = ''
        self.val['total'] = 0.0
        for line in self.splitlines():
            if line[0] == 'intr':
                self.intset2 = [ int(i) for i in line[3:] ]

        if not self.intset1:
            self.intset1 = [ 0 for i in self.intset2 ]

        for i in range(len(self.intset2)):
            total = (self.intset2[i] - self.intset1[i]) * 1.0 / elapsed

            ### Put the highest value in self.val
            if total > self.val['total']:
                if str(i+1) in self.names:
                    self.val['name'] = self.names[str(i+1)]
                else:
                    self.val['name'] = 'int ' + str(i+1)
                self.val['total'] = total

        if step == op.delay:
            self.intset1 = self.intset2

        if self.val['total'] != 0.0:
            self.output = '%-15s%s' % (self.val['name'], cprint(self.val['total'], 'd', 5, 1000))

    def showcsv(self):
        return '%s / %f' % (self.val['name'], self.val['total'])

# vim:ts=4:sw=4:et
dstat-0.7.4/plugins/dstat_top_io.py
### Author: Dag Wieers

class dstat_plugin(dstat):
    """
    Top most expensive I/O process

    Displays the name of the most expensive I/O process
    """
    def __init__(self):
        self.name = 'most expensive'
        self.vars = ('i/o process',)
        self.type = 's'
        self.width = 22
        self.scale = 0
        self.pidset1 = {}

    def check(self):
        if not os.access('/proc/self/io', os.R_OK):
            raise Exception('Kernel has no per-process I/O accounting [CONFIG_TASK_IO_ACCOUNTING], use at least 2.6.20')

    def extract(self):
        self.output = ''
        self.pidset2 = {}
        self.val['usage'] = 0.0
        for pid in proc_pidlist():
            try:
                ### Reset values
                if pid not in self.pidset2:
                    self.pidset2[pid] = {'rchar:': 0, 'wchar:': 0}
                if pid not in self.pidset1:
                    self.pidset1[pid] = {'rchar:': 0, 'wchar:': 0}
### Extract name name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1] ### Extract counters for l in proc_splitlines('/proc/%s/io' % pid): if len(l) != 2: continue self.pidset2[pid][l[0]] = int(l[1]) except IOError: continue except IndexError: continue read_usage = (self.pidset2[pid]['rchar:'] - self.pidset1[pid]['rchar:']) * 1.0 / elapsed write_usage = (self.pidset2[pid]['wchar:'] - self.pidset1[pid]['wchar:']) * 1.0 / elapsed usage = read_usage + write_usage # if usage > 0.0: # print('%s %s:%s' % (pid, read_usage, write_usage)) ### Get the process that spends the most jiffies if usage > self.val['usage']: self.val['usage'] = usage self.val['read_usage'] = read_usage self.val['write_usage'] = write_usage self.val['pid'] = pid self.val['name'] = getnamebypid(pid, name) if step == op.delay: self.pidset1 = self.pidset2 if self.val['usage'] != 0.0: self.output = '%-*s%s %s' % (self.width-11, self.val['name'][0:self.width-11], cprint(self.val['read_usage'], 'd', 5, 1024), cprint(self.val['write_usage'], 'd', 5, 1024)) ### Debug (show PID) # self.output = '%*s %-*s%s %s' % (5, self.val['pid'], self.width-17, self.val['name'][0:self.width-17], cprint(self.val['read_usage'], 'd', 5, 1024), cprint(self.val['write_usage'], 'd', 5, 1024)) def showcsv(self): return '%s / %d:%d' % (self.val['name'], self.val['read_usage'], self.val['write_usage']) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_top_io_adv.py000066400000000000000000000062611351755116500202340ustar00rootroot00000000000000### Dstat all I/O process plugin ### Displays all processes' I/O read/write stats and CPU usage ### ### Authority: Guillermo Cantu Luna class dstat_plugin(dstat): def __init__(self): self.name = 'most expensive i/o process' self.vars = ('process pid read write cpu',) self.type = 's' self.width = 40 self.scale = 0 self.pidset1 = {} def check(self): if not os.access('/proc/self/io', os.R_OK): raise Exception('Kernel has no per-process I/O accounting [CONFIG_TASK_IO_ACCOUNTING], use at least 2.6.20') 
return True def extract(self): self.output = '' self.pidset2 = {} self.val['usage'] = 0.0 for pid in proc_pidlist(): try: ### Reset values if pid not in self.pidset2: self.pidset2[pid] = {'rchar:': 0, 'wchar:': 0, 'cputime:': 0, 'cpuper:': 0} if pid not in self.pidset1: self.pidset1[pid] = {'rchar:': 0, 'wchar:': 0, 'cputime:': 0, 'cpuper:': 0} ### Extract name name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1] ### Extract counters for l in proc_splitlines('/proc/%s/io' % pid): if len(l) != 2: continue self.pidset2[pid][l[0]] = int(l[1]) ### Get CPU usage l = proc_splitline('/proc/%s/stat' % pid) if len(l) < 15: cpu_usage = 0 else: self.pidset2[pid]['cputime:'] = int(l[13]) + int(l[14]) cpu_usage = (self.pidset2[pid]['cputime:'] - self.pidset1[pid]['cputime:']) * 1.0 / elapsed / cpunr except ValueError: continue except IOError: continue except IndexError: continue read_usage = (self.pidset2[pid]['rchar:'] - self.pidset1[pid]['rchar:']) * 1.0 / elapsed write_usage = (self.pidset2[pid]['wchar:'] - self.pidset1[pid]['wchar:']) * 1.0 / elapsed usage = read_usage + write_usage ### Get the process that spends the most jiffies if usage > self.val['usage']: self.val['usage'] = usage self.val['read_usage'] = read_usage self.val['write_usage'] = write_usage self.val['pid'] = pid self.val['name'] = getnamebypid(pid, name) self.val['cpu_usage'] = cpu_usage if step == op.delay: self.pidset1 = self.pidset2 if self.val['usage'] != 0.0: self.output = '%-*s%s%-5s%s%s%s%s%%' % (self.width-14-len(pid), self.val['name'][0:self.width-14-len(pid)], color['darkblue'], self.val['pid'], cprint(self.val['read_usage'], 'd', 5, 1024), cprint(self.val['write_usage'], 'd', 5, 1024), cprint(self.val['cpu_usage'], 'f', 3, 34), color['darkgray']) def showcsv(self): return 'Top: %s\t%s\t%s\t%s' % (self.val['name'][0:self.width-20], self.val['read_usage'], self.val['write_usage'], self.val['cpu_usage']) 
dstat-0.7.4/plugins/dstat_top_latency.py000066400000000000000000000043641351755116500204340ustar00rootroot00000000000000### Authority: Dag Wieers class dstat_plugin(dstat): """ Top process with highest total latency. Displays name and total amount of CPU time waited in milliseconds of the process that has the highest total amount waited for the measured timeframe. For more information see: http://eaglet.rain.com/rick/linux/schedstat/ """ def __init__(self): self.name = 'highest total' self.vars = ('latency process',) self.type = 's' self.width = 17 self.scale = 0 self.pidset1 = {} def check(self): if not os.access('/proc/self/schedstat', os.R_OK): raise Exception('Kernel has no scheduler statistics [CONFIG_SCHEDSTATS], use at least 2.6.12') def extract(self): self.output = '' self.pidset2 = {} self.val['result'] = 0 for pid in proc_pidlist(): try: ### Reset values if pid not in self.pidset1: self.pidset1[pid] = {'wait_ticks': 0} ### Extract name name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1] ### Extract counters l = proc_splitline('/proc/%s/schedstat' % pid) except IOError: continue except IndexError: continue if len(l) != 3: continue self.pidset2[pid] = {'wait_ticks': int(l[1])} totwait = (self.pidset2[pid]['wait_ticks'] - self.pidset1[pid]['wait_ticks']) * 1.0 / elapsed ### Get the process that spends the most jiffies if totwait > self.val['result']: self.val['result'] = totwait self.val['pid'] = pid self.val['name'] = getnamebypid(pid, name) if step == op.delay: self.pidset1 = self.pidset2 if self.val['result'] != 0.0: self.output = '%-*s%s' % (self.width-4, self.val['name'][0:self.width-4], cprint(self.val['result'], 'd', 4, 100)) ### Debug (show PID) # self.output = '%*s %-*s' % (5, self.val['pid'], self.width-6, self.val['name']) def showcsv(self): return '%s / %.4f' % (self.val['name'], self.val['result']) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_top_latency_avg.py000066400000000000000000000044771351755116500212760ustar00rootroot00000000000000### 
Dstat most expensive I/O process plugin ### Displays the name of the most expensive I/O process ### ### Authority: dag@wieers.com ### For more information, see: ### http://eaglet.rain.com/rick/linux/schedstat/ class dstat_plugin(dstat): def __init__(self): self.name = 'highest average' self.vars = ('latency process',) self.type = 's' self.width = 17 self.scale = 0 self.pidset1 = {} def check(self): if not os.access('/proc/self/schedstat', os.R_OK): raise Exception('Kernel has no scheduler statistics [CONFIG_SCHEDSTATS], use at least 2.6.12') def extract(self): self.output = '' self.pidset2 = {} self.val['result'] = 0 for pid in proc_pidlist(): try: ### Reset values if pid not in self.pidset1: self.pidset1[pid] = {'wait_ticks': 0, 'ran': 0} ### Extract name name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1] ### Extract counters l = proc_splitline('/proc/%s/schedstat' % pid) except IOError: continue except IndexError: continue if len(l) != 3: continue self.pidset2[pid] = {'wait_ticks': int(l[1]), 'ran': int(l[2])} if self.pidset2[pid]['ran'] - self.pidset1[pid]['ran'] > 0: avgwait = (self.pidset2[pid]['wait_ticks'] - self.pidset1[pid]['wait_ticks']) * 1.0 / (self.pidset2[pid]['ran'] - self.pidset1[pid]['ran']) / elapsed else: avgwait = 0 ### Get the process that spends the most jiffies if avgwait > self.val['result']: self.val['result'] = avgwait self.val['pid'] = pid self.val['name'] = getnamebypid(pid, name) if step == op.delay: self.pidset1 = self.pidset2 if self.val['result'] != 0.0: self.output = '%-*s%s' % (self.width-4, self.val['name'][0:self.width-4], cprint(self.val['result'], 'f', 4, 100)) ### Debug (show PID) # self.output = '%*s %-*s' % (5, self.val['pid'], self.width-6, self.val['name']) def showcsv(self): return '%s / %.4f' % (self.val['name'], self.val['result']) # vim:ts=4:sw=4:et dstat-0.7.4/plugins/dstat_top_mem.py000066400000000000000000000026671351755116500175570ustar00rootroot00000000000000### Authority: Dag Wieers class dstat_plugin(dstat): 
    """
    Most expensive memory process.

    Displays the process that uses the most memory during the monitored
    interval, based on per process memory information.
    """
    def __init__(self):
        self.name = 'most expensive'
        self.vars = ('memory process',)
        self.type = 's'
        self.width = 17
        self.scale = 0

    def extract(self):
        self.val['max'] = 0.0
        for pid in proc_pidlist():
            try:
                ### Using dopen() will cause too many open files
                l = proc_splitline('/proc/%s/stat' % pid)
            except IOError:
                continue

            if len(l) < 23: continue
            usage = int(l[23]) * pagesize

            ### Is it a new topper ?
            if usage <= self.val['max']: continue

            self.val['max'] = usage
            self.val['name'] = getnamebypid(pid, l[1][1:-1])
            self.val['pid'] = pid

        self.output = '%-*s%s' % (self.width-5, self.val['name'][0:self.width-5], cprint(self.val['max'], 'f', 5, 1024))

        ### Debug (show PID)
#        self.val['memory process'] = '%*s %-*s' % (5, self.val['pid'], self.width-6, self.val['name'])

    def showcsv(self):
        return '%s / %d%%' % (self.val['name'], self.val['max'])

# vim:ts=4:sw=4:et
dstat-0.7.4/plugins/dstat_top_oom.py
### Author: Dag Wieers
### Dstat most expensive process plugin
### Displays the name of the most expensive process
### More information:
###   http://lwn.net/Articles/317814/

class dstat_plugin(dstat):
    def __init__(self):
        self.name = 'out of memory'
        self.vars = ('kill score',)
        self.type = 's'
        self.width = 18
        self.scale = 0

    def check(self):
        if not os.access('/proc/self/oom_score', os.R_OK):
            raise Exception('Kernel does not support /proc/pid/oom_score, use at least 2.6.11.')

    def extract(self):
        self.output = ''
        self.val['max'] = 0.0
        for pid in proc_pidlist():
            try:
                ### Extract name
                name = proc_splitline('/proc/%s/stat' % pid)[1][1:-1]

                ### Using dopen() will cause too many open files
                l = proc_splitline('/proc/%s/oom_score' % pid)
            except IOError:
                continue
            except IndexError:
                continue

            if len(l) < 1: continue
            oom_score = int(l[0])

            ### Is it a new topper ?
            if oom_score <= self.val['max']: continue

            self.val['max'] = oom_score
            self.val['name'] = getnamebypid(pid, name)
            self.val['pid'] = pid

        if self.val['max'] != 0.0:
            self.output = '%-*s%s' % (self.width-4, self.val['name'][0:self.width-4], cprint(self.val['max'], 'f', 4, 1000))

        ### Debug (show PID)
#        self.output = '%*s %-*s' % (5, self.val['pid'], self.width-6, self.val['name'])

    def showcsv(self):
        return '%s / %d%%' % (self.val['name'], self.val['max'])

# vim:ts=4:sw=4:et
dstat-0.7.4/plugins/dstat_utmp.py
### Author: Dag Wieers

class dstat_plugin(dstat):
    def __init__(self):
        self.name = 'utmp'
        self.nick = ('ses', 'usr', 'adm' )
        self.vars = ('sessions', 'users', 'root')
        self.type = 'd'
        self.width = 3
        self.scale = 10

    def check(self):
        try:
            global utmp
            import utmp
        except:
            raise Exception('Needs python-utmp module')

    def extract(self):
        for name in self.vars: self.val[name] = 0
        for u in utmp.UtmpRecord():
#            print('# type:%s pid:%s line:%s id:%s user:%s host:%s session:%s' % (i.ut_type, i.ut_pid, i.ut_line, i.ut_id, i.ut_user, i.ut_host, i.ut_session))
            if u.ut_type == utmp.USER_PROCESS:
                self.val['users'] = self.val['users'] + 1
                if u.ut_user == 'root':
                    self.val['root'] = self.val['root'] + 1
            self.val['sessions'] = self.val['sessions'] + 1

# vim:ts=4:sw=4:et
dstat-0.7.4/plugins/dstat_vm_cpu.py
### Author: Bert de Bruijn
### VMware cpu stats
### Displays CPU stats coming from the hypervisor inside VMware VMs.
### The vmGuestLib API from VMware Tools needs to be installed

class dstat_plugin(dstat):
    def __init__(self):
        self.name = 'vm cpu'
        self.vars = ('used', 'stolen', 'elapsed')
        self.nick = ('usd', 'stl')
        self.type = 'p'
        self.width = 3
        self.scale = 100
        self.cpunr = getcpunr()

    def check(self):
        try:
            global vmguestlib
            import vmguestlib
            self.gl = vmguestlib.VMGuestLib()
        except:
            raise Exception('Needs python-vmguestlib module')

    def extract(self):
        self.gl.UpdateInfo()
        self.set2['elapsed'] = self.gl.GetElapsedMs()
        self.set2['stolen'] = self.gl.GetCpuStolenMs()
        self.set2['used'] = self.gl.GetCpuUsedMs()

        for name in ('stolen', 'used'):
            self.val[name] = (self.set2[name] - self.set1[name]) * 100 / (self.set2['elapsed'] - self.set1['elapsed']) / self.cpunr

        if step == op.delay:
            self.set1.update(self.set2)

# vim:ts=4:sw=4
dstat-0.7.4/plugins/dstat_vm_mem.py
### Author: Bert de Bruijn
### VMware memory stats
### Displays memory stats coming from the hypervisor inside VMware VMs.
### The vmGuestLib API from VMware Tools needs to be installed

class dstat_plugin(dstat):
    def __init__(self):
        self.name = 'vmware memory'
        self.vars = ('active', 'ballooned', 'mapped', 'swapped', 'used')
        self.nick = ('active', 'balln', 'mappd', 'swapd', 'used')
        self.type = 'd'
        self.width = 5
        self.scale = 1024

    def check(self):
        try:
            global vmguestlib
            import vmguestlib
            self.gl = vmguestlib.VMGuestLib()
        except:
            raise Exception('Needs python-vmguestlib module')

    def extract(self):
        self.gl.UpdateInfo()
        self.val['active'] = self.gl.GetMemActiveMB() * 1024 ** 2
        self.val['ballooned'] = self.gl.GetMemBalloonedMB() * 1024 ** 2
        self.val['mapped'] = self.gl.GetMemMappedMB() * 1024 ** 2
        self.val['swapped'] = self.gl.GetMemSwappedMB() * 1024 ** 2
        self.val['used'] = self.gl.GetMemUsedMB() * 1024 ** 2

# vim:ts=4:sw=4
dstat-0.7.4/plugins/dstat_vm_mem_adv.py
### Author: Bert de Bruijn
### VMware advanced memory stats
### Displays memory stats coming from the hypervisor inside VMware VMs.
### The vmGuestLib API from VMware Tools needs to be installed class dstat_plugin(dstat): def __init__(self): self.name = 'vmware advanced memory' self.vars = ('active', 'ballooned', 'mapped', 'overhead', 'saved', 'shared', 'swapped', 'targetsize', 'used') self.nick = ('active', 'balln', 'mappd', 'ovrhd', 'saved', 'shard', 'swapd', 'targt', 'used') self.type = 'd' self.width = 5 self.scale = 1024 def check(self): try: global vmguestlib import vmguestlib self.gl = vmguestlib.VMGuestLib() except: raise Exception('Needs python-vmguestlib module') def extract(self): self.gl.UpdateInfo() self.val['active'] = self.gl.GetMemActiveMB() * 1024 ** 2 self.val['ballooned'] = self.gl.GetMemBalloonedMB() * 1024 ** 2 self.val['mapped'] = self.gl.GetMemMappedMB() * 1024 ** 2 self.val['overhead'] = self.gl.GetMemOverheadMB() * 1024 ** 2 self.val['saved'] = self.gl.GetMemSharedSavedMB() * 1024 ** 2 self.val['shared'] = self.gl.GetMemSharedMB() * 1024 ** 2 self.val['swapped'] = self.gl.GetMemSwappedMB() * 1024 ** 2 self.val['targetsize'] = self.gl.GetMemTargetSizeMB() * 1024 ** 2 self.val['used'] = self.gl.GetMemUsedMB() * 1024 ** 2 # vim:ts=4:sw=4 dstat-0.7.4/plugins/dstat_vmk_hba.py000066400000000000000000000054501351755116500175170ustar00rootroot00000000000000### Author: Bert de Bruijn ### VMware ESX kernel vmhba stats ### Displays kernel vmhba statistics on VMware ESX servers # NOTE TO USERS: command-line plugin configuration is not yet possible, so I've # "borrowed" the -D argument. # EXAMPLES: # # dstat --vmkhba -D vmhba1,vmhba2,total # # dstat --vmkhba -D vmhba0 # You can even combine the Linux and VMkernel diskstats (but the "total" argument # will be used by both). # # dstat --vmkhba -d -D sda,vmhba1 class dstat_plugin(dstat): def __init__(self): self.name = 'vmkhba' self.nick = ('read', 'writ') self.cols = 2 def discover(self, *list): # discover will list all vmhba's found. 
        # we might want to filter out the unused vmhba's (read stats, compare with ['0', ] * 13)
        ret = []
        try:
            list = os.listdir('/proc/vmware/scsi/')
        except:
            raise Exception('Needs VMware ESX')
        for name in list:
            for line in dopen('/proc/vmware/scsi/%s/stats' % name).readlines():
                l = line.split()
                if len(l) < 13: continue
                if l[0] == 'cmds': continue
                if l == ['0', ] * 13: continue
                ret.append(name)
        return ret

    def vars(self):
        # vars will take the argument list - when implemented - , use total, or will use discover + total
        ret = []
        if op.disklist:
            list = op.disklist
        #elif not op.full:
        #    list = ('total', )
        else:
            list = self.discover
        list.sort()
        for name in list:
            if name in self.discover + ['total']:
                ret.append(name)
        return ret

    def check(self):
        try:
            os.listdir('/proc/vmware')
        except:
            raise Exception('Needs VMware ESX')
        info(1, 'The vmkhba module is an EXPERIMENTAL module.')

    def extract(self):
        self.set2['total'] = (0, 0)
        for name in self.vars:
            self.set2[name] = (0, 0)
        for name in os.listdir('/proc/vmware/scsi/'):
            for line in dopen('/proc/vmware/scsi/%s/stats' % name).readlines():
                l = line.split()
                if len(l) < 13: continue
                if l[0] == 'cmds': continue
                if l[2] == '0' and l[4] == '0': continue
                if l == ['0', ] * 13: continue
                self.set2['total'] = ( self.set2['total'][0] + int(l[2]), self.set2['total'][1] + int(l[4]) )
                if name in self.vars and name != 'total':
                    self.set2[name] = ( int(l[2]), int(l[4]) )
        for name in self.set2:
            self.val[name] = list(map(lambda x, y: (y - x) * 1024.0 / elapsed, self.set1[name], self.set2[name]))
        if step == op.delay:
            self.set1.update(self.set2)

dstat-0.7.4/plugins/dstat_vmk_int.py

### Author: Bert de Bruijn
### VMware ESX kernel interrupt stats
### Displays kernel interrupt statistics on VMware ESX servers

# NOTE TO USERS: command-line plugin configuration is not yet possible, so I've
# "borrowed" the -I argument.
# EXAMPLES:
# # dstat --vmkint -I 0x46,0x5a
# You can even combine the Linux and VMkernel interrupt stats
# # dstat --vmkint -i -I 14,0x5a
# Look at /proc/vmware/interrupts to see which interrupt is linked to which function

class dstat_plugin(dstat):
    def __init__(self):
        self.name = 'vmkint'
        self.type = 'd'
        self.width = 4
        self.scale = 1000
        self.open('/proc/vmware/interrupts')
#        self.intmap = self.intmap()

#    def intmap(self):
#        ret = {}
#        for line in dopen('/proc/vmware/interrupts').readlines():
#            l = line.split()
#            if len(l) <= self.vmkcpunr: continue
#            l1 = l[0].split(':')[0]
#            l2 = ' '.join(l[vmkcpunr()+1:]).split(',')
#            ret[l1] = l1
#            for name in l2:
#                ret[name.strip().lower()] = l1
#        return ret

    def vmkcpunr(self):
        # the service console sees only one CPU, so cpunr == 1, only the vmkernel sees all CPUs
        ret = []
        # default cpu number is 2
        ret = 2
        for l in self.fd[0].splitlines():
            if l[0] == 'Vector':
                ret = int( int( l[-1] ) + 1 )
        return ret

    def discover(self):
        # interrupt names are not decimal numbers, but rather hexadecimal numbers like 0x7e
        ret = []
        self.fd[0].seek(0)
        for line in self.fd[0].readlines():
            l = line.split()
            if l[0] == 'Vector': continue
            if len(l) < self.vmkcpunr()+1: continue
            name = l[0].split(':')[0]
            amount = 0
            for i in l[1:1+self.vmkcpunr()]:
                amount = amount + int(i)
            if amount > 20: ret.append(str(name))
        return ret

    def vars(self):
        ret = []
        if op.intlist:
            list = op.intlist
        else:
            list = self.discover
#            len(list) > 5: list = list[-5:]
        for name in list:
            if name in self.discover:
                ret.append(name)
#            elif name.lower() in self.intmap:
#                ret.append(self.intmap[name.lower()])
        return ret

    def check(self):
        try:
            os.listdir('/proc/vmware')
        except:
            raise Exception('Needs VMware ESX')
        info(1, 'The vmkint module is an EXPERIMENTAL module.')

    def extract(self):
        self.fd[0].seek(0)
        for line in self.fd[0].readlines():
            l = line.split()
            if len(l) < self.vmkcpunr()+1: continue
            name = l[0].split(':')[0]
            if name in self.vars:
                self.set2[name] = 0
                for i in l[1:1+self.vmkcpunr()]:
                    self.set2[name] = self.set2[name] + int(i)
        for name in self.set2:
            self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
        if step == op.delay:
            self.set1.update(self.set2)

# vim:ts=4:sw=4

dstat-0.7.4/plugins/dstat_vmk_nic.py

### Author: Bert de Bruijn
### VMware ESX kernel vmknic stats
### Displays VMkernel port statistics on VMware ESX servers

# NOTE TO USERS: command-line plugin configuration is not yet possible, so I've
# "borrowed" the -N argument.
# EXAMPLES:
# # dstat --vmknic -N vmk1
# You can even combine the Linux and VMkernel network stats (just don't just "total").
# # dstat --vmknic -n -N vmk0,vswif0
# NB Data comes from /proc/vmware/net/tcpip/ifconfig

class dstat_plugin(dstat):
    def __init__(self):
        self.name = 'vmknic'
        self.nick = ('recv', 'send')
        self.open('/proc/vmware/net/tcpip/ifconfig')
        self.cols = 2

    def check(self):
        try:
            os.listdir('/proc/vmware')
        except:
            raise Exception('Needs VMware ESX')
        info(1, 'The vmknic module is an EXPERIMENTAL module.')

    def discover(self, *list):
        ret = []
        for l in self.fd[0].splitlines(replace=' /', delim='/'):
            if len(l) != 12: continue
            if l[2][:5] == '

#Version: 2.2
#VEID user nice system uptime idle strv uptime used maxlat totlat numsched
#302 142926 0 10252 152896388 852779112954062 0 427034187248480 1048603937010 0 0 0
#301 27188 0 7896 152899846 853267000490282 0 427043845492614 701812592320 0 0 0

class dstat_plugin(dstat):
    def __init__(self):
        self.nick = ('usr', 'sys', 'idl', 'nic')
        self.type = 'p'
        self.width = 3
        self.scale = 34
        self.open('/proc/vz/vestat')
        self.cols = 4

    def check(self):
        info(1, 'Module %s is still experimental.' % self.filename)

    def discover(self, *list):
        ret = []
        for l in self.splitlines():
            if len(l) < 6 or l[0] == 'VEID': continue
            ret.append(l[0])
        ret.sort()
        for item in list:
            ret.append(item)
        return ret

    def name(self):
        ret = []
        for name in self.vars:
            if name == 'total':
                ret.append('total ve usage')
            else:
                ret.append('ve ' + name + ' usage')
        return ret

    def vars(self):
        ret = []
        if not op.full:
            list = ('total', )
        else:
            list = self.discover
        for name in list:
            if name in self.discover + ['total']:
                ret.append(name)
        return ret

    def extract(self):
        self.set2['total'] = [0, 0, 0, 0]
        for l in self.splitlines():
            if len(l) < 6 or l[0] == 'VEID': continue
            name = l[0]
            self.set2[name] = ( int(l[1]), int(l[3]), int(l[4]) - int(l[1]) - int(l[2]) - int(l[3]), int(l[2]) )
            self.set2['total'] = ( self.set2['total'][0] + int(l[1]), self.set2['total'][1] + int(l[3]), self.set2['total'][2] + int(l[4]) - int(l[1]) - int(l[2]) - int(l[3]), self.set2['total'][3] + int(l[2]) )
        for name in self.vars:
            for i in range(self.cols):
                self.val[name][i] = 100.0 * (self.set2[name][i] - self.set1[name][i]) / (sum(self.set2[name]) - sum(self.set1[name]))
        if step == op.delay:
            self.set1.update(self.set2)

# vim:ts=4:sw=4:et

dstat-0.7.4/plugins/dstat_vz_io.py

### Author: Dag Wieers

### Example content for /proc/bc/<veid>/ioacct
#       read                          2773011640320
#       write                         2095707136000
#       dirty                         4500342390784
#       cancel                        4080624041984
#       missed                                    0
#       syncs_total                               2
#       fsyncs_total                        1730732
#       fdatasyncs_total                       3266
#       range_syncs_total                         0
#       syncs_active                              0
#       fsyncs_active                             0
#       fdatasyncs_active                         0
#       range_syncs_active                        0
#       vfs_reads                        3717331387
#       vfs_read_chars          3559144863185798078
#       vfs_writes                        901216138
#       vfs_write_chars           23864660931174682
#       io_pbs                                   16

class dstat_plugin(dstat):
    def __init__(self):
        self.nick = ['read', 'write', 'dirty', 'cancel', 'missed']
        self.cols = len(self.nick)

    def check(self):
        if not os.path.exists('/proc/vz'):
            raise Exception('System does not have OpenVZ support')
        elif not os.path.exists('/proc/bc'):
            raise Exception('System does not have (new) OpenVZ beancounter support')
        elif not glob.glob('/proc/bc/*/ioacct'):
            raise Exception('System does not have any OpenVZ containers')
        info(1, 'Module %s is still experimental.' % self.filename)

    def name(self):
        return ['ve/'+name for name in self.vars]

    def vars(self):
        ret = []
        if not op.full:
            varlist = ['total',]
        else:
            varlist = [os.path.basename(veid) for veid in glob.glob('/proc/vz/*')]
        ret = varlist
        return ret

    def extract(self):
        for name in self.vars:
            self.set2['total'] = {}
            for line in dopen('/proc/bc/%s/ioacct' % name).readlines():
                l = line.split()
                if len(l) != 2: continue
                if l[0] not in self.nick: continue
                index = self.nick.index(l[0])
                self.set2[name][index] = int(l[1])
                self.set2['total'][index] = self.set2['total'][index] + int(l[1])
#            print(name, self.val[name], self.set2[name][0], self.set2[name][1])
#            print(name, self.val[name], self.set1[name][0], self.set1[name][1])
            self.val[name] = list(map(lambda x, y: (y - x) / elapsed, self.set1[name], self.set2[name]))
        if step == op.delay:
            self.set1.update(self.set2)

# vim:ts=4:sw=4:et

dstat-0.7.4/plugins/dstat_vz_ubc.py

### Author: Dag Wieers

class dstat_plugin(dstat):
    def __init__(self):
        self.nick = ('fcnt', )
        self.type = 'd'
        self.width = 5
        self.scale = 1000
        self.open('/proc/user_beancounters')
        self.cols = 1 ### Is this correct ?

    def check(self):
        info(1, 'Module %s is still experimental.' % self.filename)

    def discover(self, *list):
        ret = []
        for l in self.splitlines():
            if len(l) < 7 or l[0] in ('uid', '0:'): continue
            ret.append(l[0][0:-1])
        ret.sort()
        for item in list:
            ret.append(item)
        return ret

    def name(self):
        ret = []
        for name in self.vars:
            if name == 'total':
                ret.append('total failcnt')
            else:
                ret.append(name)
        return ret

    def vars(self):
        ret = []
        if not op.full:
            list = ('total', )
        else:
            list = self.discover
        for name in list:
            if name in self.discover + ['total']:
                ret.append(name)
        return ret

    def extract(self):
        for name in self.vars + ['total']:
            self.set2[name] = 0
        for l in self.splitlines():
            if len(l) < 6 or l[0] == 'uid': continue
            elif len(l) == 7:
                name = l[0][0:-1]
                if name in self.vars:
                    self.set2[name] = self.set2[name] + int(l[6])
                self.set2['total'] = self.set2['total'] + int(l[6])
            elif name == '0': continue
            else:
                if name in self.vars:
                    self.set2[name] = self.set2[name] + int(l[5])
                self.set2['total'] = self.set2['total'] + int(l[5])
        for name in self.vars:
            self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
        if step == op.delay:
            self.set1.update(self.set2)

# vim:ts=4:sw=4:et

dstat-0.7.4/plugins/dstat_wifi.py

### Author: Dag Wieers

class dstat_plugin(dstat):
    def __init__(self):
        self.name = 'wifi'
        self.nick = ('lnk', 's/n')
        self.type = 'd'
        self.width = 3
        self.scale = 34
        self.cols = 2

    def check(self):
        global iwlibs
        from pythonwifi import iwlibs

    def vars(self):
        return iwlibs.getNICnames()

    def extract(self):
        for name in self.vars:
            wifi = iwlibs.Wireless(name)
            stat, qual, discard, missed_beacon = wifi.getStatistics()
#            print(qual.quality, qual.signallevel, qual.noiselevel)
            if qual.quality == 0 or qual.signallevel == -101 or qual.noiselevel == -101 or qual.signallevel == -256 or qual.noiselevel == -256:
                self.val[name] = ( -1, -1 )
            else:
                self.val[name] = ( qual.quality, qual.signallevel * 100 / qual.noiselevel )

# vim:ts=4:sw=4:et
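All of the plugins above share the same two-snapshot pattern: `extract()` reads the current counters into `set2`, derives per-second rates from `(set2 - set1) / elapsed`, and dstat rolls `set2` into `set1` once per delay step. A minimal standalone sketch of that rate computation (the function name `rate` and the sample snapshots are hypothetical, not part of dstat's API):

```python
# Standalone sketch of dstat's counter-delta pattern: two snapshots of
# cumulative counters, turned into per-second rates.

def rate(set1, set2, elapsed):
    """Per-second rate between two counter snapshots (dstat-style)."""
    return {name: (set2[name] - set1[name]) * 1.0 / elapsed for name in set2}

# Two snapshots of a cumulative counter taken 2 seconds apart:
before = {'hits': 1000, 'misses': 40}
after = {'hits': 1600, 'misses': 60}
print(rate(before, after, 2.0))  # {'hits': 300.0, 'misses': 10.0}
```

Multiplying by `1.0` before dividing mirrors the plugins' Python 2 idiom for forcing float division; under Python 3 the division is already true division.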
dstat-0.7.4/plugins/dstat_zfs_arc.py

class dstat_plugin(dstat):
    """
    ZFS on Linux ARC (Adjustable Replacement Cache)
    Data is extracted from /proc/spl/kstat/zfs/arcstats
    """
    def __init__(self):
        self.name = 'ZFS ARC'
        self.nick = ('mem', 'hit', 'miss', 'reads', 'hit%')
        self.vars = ('size', 'hits', 'misses', 'total', 'hit_rate')
        self.types = ('b', 'd', 'd', 'd', 'p')
        self.scales = (1024, 1000, 1000, 1000, 1000)
        self.counter = (False, True, True, False, False)
        self.open('/proc/spl/kstat/zfs/arcstats')

    def extract(self):
        for l in self.splitlines():
            if len(l) < 2: continue
            name = l[0]
            if name in self.vars:
                self.set2[name] = int(l[2])
        for i, name in enumerate(self.vars):
            if self.counter[i]:
                self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
            else:
                self.val[name] = self.set2[name]
        self.val['total'] = self.val['hits'] + self.val['misses']
        if self.val['total'] > 0:
            self.val['hit_rate'] = self.val['hits'] / self.val['total'] * 100.0
        else:
            self.val['hit_rate'] = 0
        if step == op.delay:
            self.set1.update(self.set2)

# vim:ts=4:sw=4:et

dstat-0.7.4/plugins/dstat_zfs_l2arc.py

class dstat_plugin(dstat):
    """
    ZFS on Linux L2ARC (Level 2 Adjustable Replacement Cache)
    Data is extracted from /proc/spl/kstat/zfs/arcstats
    """
    def __init__(self):
        self.name = 'ZFS L2ARC'
        self.nick = ('size', 'hit', 'miss', 'hit%', 'read', 'write')
        self.vars = ('l2_size', 'l2_hits', 'l2_misses', 'hit_rate', 'l2_read_bytes', 'l2_write_bytes')
        self.types = ('b', 'd', 'd', 'p', 'b', 'b')
        self.scales = (1024, 1000, 1000, 1000, 1024, 1024)
        self.counter = (False, True, True, False, True, True)
        self.open('/proc/spl/kstat/zfs/arcstats')

    def extract(self):
        for l in self.splitlines():
            if len(l) < 2: continue
            name = l[0]
            if name in self.vars:
                self.set2[name] = int(l[2])
        for i, name in enumerate(self.vars):
            if self.counter[i]:
                self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
            else:
                self.val[name] = self.set2[name]
        probes = self.val['l2_hits'] + self.val['l2_misses']
        if probes > 0:
            self.val['hit_rate'] = self.val['l2_hits'] / probes * 100.0
        else:
            self.val['hit_rate'] = 0
        if step == op.delay:
            self.set1.update(self.set2)

# vim:ts=4:sw=4:et

dstat-0.7.4/plugins/dstat_zfs_zil.py

class dstat_plugin(dstat):
    """
    ZFS on Linux ZIL (ZFS Intent Log)
    Data is extracted from /proc/spl/kstat/zfs/zil
    """
    def __init__(self):
        self.name = 'ZFS ZIL'
        self.nick = ('count', 'bytes')
        self.vars = ('zil_itx_metaslab_slog_count', 'zil_itx_metaslab_slog_bytes')
        self.types = ('d', 'b')
        self.scales = (1000, 1024)
        self.counter = (True, True)
        self.open('/proc/spl/kstat/zfs/zil')

    def extract(self):
        for l in self.splitlines():
            if len(l) < 2: continue
            name = l[0]
            if name in self.vars:
                self.set2[name] = int(l[2])
        for i, name in enumerate(self.vars):
            if self.counter[i]:
                self.val[name] = (self.set2[name] - self.set1[name]) * 1.0 / elapsed
            else:
                self.val[name] = self.set2[name]
        if step == op.delay:
            self.set1.update(self.set2)

# vim:ts=4:sw=4:et

dstat-0.7.4/proc/diskstats-2.6.11

1 0 ram0 0 0 0 0 0 0 0 0 0 0 0 1 1 ram1 0 0 0 0 0 0 0 0 0 0 0 1 2 ram2 0 0 0 0 0 0 0 0 0 0 0 1 3 ram3 0 0 0 0 0 0 0 0 0 0 0 1 4 ram4 0 0 0 0 0 0 0 0 0 0 0 1 5 ram5 0 0 0 0 0 0 0 0 0 0 0 1 6 ram6 0 0 0 0 0 0 0 0 0 0 0 1 7 ram7 0 0 0 0 0 0 0 0 0 0 0 1 8 ram8 0 0 0 0 0 0 0 0 0 0 0 1 9 ram9 0 0 0 0 0 0 0 0 0 0 0 1 10 ram10 0 0 0 0 0 0 0 0 0 0 0 1 11 ram11 0 0 0 0 0 0 0 0 0 0 0 1 12 ram12 0 0 0 0 0 0 0 0 0 0 0 1 13 ram13 0 0 0 0 0 0 0 0 0 0 0 1 14 ram14 0 0 0 0 0 0 0 0 0 0 0 1 15 ram15 0 0 0 0 0 0 0 0 0 0 0 3 0 hda 701516 942900 12172907 6225444 284041
770510 6545342 14685351 0 3033115 20954254 3 1 hda1 51 73 0 0 3 2 hda2 970571 970620 265871 265871 3 3 hda3 3 6 0 0 3 5 hda5 633013 10305474 647709 5181672 3 6 hda6 22653 181209 51758 414064 3 7 hda7 4505 538299 1 8 3 8 hda8 1075 32107 1 8 3 9 hda9 4685 80135 90117 683719 22 0 hdc 3699 183366 748408 154422 0 0 0 0 0 153383 154422 9 0 md0 0 0 0 0 0 0 0 0 0 0 0 253 0 dm-0 0 0 0 0 0 0 0 0 0 0 0 7 0 loop0 0 0 0 0 0 0 0 0 0 0 0 7 1 loop1 0 0 0 0 0 0 0 0 0 0 0 7 2 loop2 0 0 0 0 0 0 0 0 0 0 0 7 3 loop3 0 0 0 0 0 0 0 0 0 0 0 7 4 loop4 0 0 0 0 0 0 0 0 0 0 0 7 5 loop5 0 0 0 0 0 0 0 0 0 0 0 7 6 loop6 0 0 0 0 0 0 0 0 0 0 0 7 7 loop7 0 0 0 0 0 0 0 0 0 0 0 7 8 loop8 0 0 0 0 0 0 0 0 0 0 0 7 9 loop9 0 0 0 0 0 0 0 0 0 0 0 7 10 loop10 0 0 0 0 0 0 0 0 0 0 0 7 11 loop11 0 0 0 0 0 0 0 0 0 0 0 7 12 loop12 0 0 0 0 0 0 0 0 0 0 0 7 13 loop13 0 0 0 0 0 0 0 0 0 0 0 7 14 loop14 0 0 0 0 0 0 0 0 0 0 0 7 15 loop15 0 0 0 0 0 0 0 0 0 0 0 7 16 loop16 0 0 0 0 0 0 0 0 0 0 0 7 17 loop17 0 0 0 0 0 0 0 0 0 0 0 7 18 loop18 0 0 0 0 0 0 0 0 0 0 0 7 19 loop19 0 0 0 0 0 0 0 0 0 0 0 7 20 loop20 0 0 0 0 0 0 0 0 0 0 0 7 21 loop21 0 0 0 0 0 0 0 0 0 0 0 7 22 loop22 0 0 0 0 0 0 0 0 0 0 0 7 23 loop23 0 0 0 0 0 0 0 0 0 0 0 7 24 loop24 0 0 0 0 0 0 0 0 0 0 0 7 25 loop25 0 0 0 0 0 0 0 0 0 0 0 7 26 loop26 0 0 0 0 0 0 0 0 0 0 0 7 27 loop27 0 0 0 0 0 0 0 0 0 0 0 7 28 loop28 0 0 0 0 0 0 0 0 0 0 0 7 29 loop29 0 0 0 0 0 0 0 0 0 0 0 7 30 loop30 0 0 0 0 0 0 0 0 0 0 0 7 31 loop31 0 0 0 0 0 0 0 0 0 0 0 7 32 loop32 0 0 0 0 0 0 0 0 0 0 0 7 33 loop33 0 0 0 0 0 0 0 0 0 0 0 7 34 loop34 0 0 0 0 0 0 0 0 0 0 0 7 35 loop35 0 0 0 0 0 0 0 0 0 0 0 7 36 loop36 0 0 0 0 0 0 0 0 0 0 0 7 37 loop37 0 0 0 0 0 0 0 0 0 0 0 7 38 loop38 0 0 0 0 0 0 0 0 0 0 0 7 39 loop39 0 0 0 0 0 0 0 0 0 0 0 7 40 loop40 0 0 0 0 0 0 0 0 0 0 0 7 41 loop41 0 0 0 0 0 0 0 0 0 0 0 7 42 loop42 0 0 0 0 0 0 0 0 0 0 0 7 43 loop43 0 0 0 0 0 0 0 0 0 0 0 7 44 loop44 0 0 0 0 0 0 0 0 0 0 0 7 45 loop45 0 0 0 0 0 0 0 0 0 0 0 7 46 loop46 0 0 0 0 0 0 0 0 0 0 0 7 47 loop47 0 0 0 0 0 0 0 
0 0 0 0 7 48 loop48 0 0 0 0 0 0 0 0 0 0 0 7 49 loop49 0 0 0 0 0 0 0 0 0 0 0 7 50 loop50 0 0 0 0 0 0 0 0 0 0 0 7 51 loop51 0 0 0 0 0 0 0 0 0 0 0 7 52 loop52 0 0 0 0 0 0 0 0 0 0 0 7 53 loop53 0 0 0 0 0 0 0 0 0 0 0 7 54 loop54 0 0 0 0 0 0 0 0 0 0 0 7 55 loop55 0 0 0 0 0 0 0 0 0 0 0 7 56 loop56 0 0 0 0 0 0 0 0 0 0 0 7 57 loop57 0 0 0 0 0 0 0 0 0 0 0 7 58 loop58 0 0 0 0 0 0 0 0 0 0 0 7 59 loop59 0 0 0 0 0 0 0 0 0 0 0 7 60 loop60 0 0 0 0 0 0 0 0 0 0 0 7 61 loop61 0 0 0 0 0 0 0 0 0 0 0 7 62 loop62 0 0 0 0 0 0 0 0 0 0 0 7 63 loop63 0 0 0 0 0 0 0 0 0 0 0

dstat-0.7.4/proc/partitions-2.4.21

major minor #blocks name rio rmerge rsect ruse wio wmerge wsect wuse running use aveq 22 0 20094480 hdc 18502 68378 614922 575620 63 16 632 660 -9 28571324 42702177 22 1 48163 hdc1 46 70 232 620 0 0 0 0 0 310 620 22 2 5116702 hdc2 15272 58372 588450 543550 63 16 632 660 0 31290 544240 22 3 5116702 hdc3 46 70 232 620 0 0 0 0 0 310 620 22 4 1 hdc4 0 0 0 0 0 0 0 0 0 0 0 22 5 5116671 hdc5 46 70 232 540 0 0 0 0 0 280 540 22 6 2562336 hdc6 46 70 232 610 0 0 0 0 0 310 610 22 7 514048 hdc7 46 70 232 580 0 0 0 0 0 290 580 22 8 265041 hdc8 58 98 312 360 0 0 0 0 0 180 360 22 9 1349428 hdc9 46 70 232 320 0 0 0 0 0 160 320 3 0 80043264 hda 165456390 118310028 -2026323812 11081595 319064897 471719179 2045961426 3973385 -6386 23654554 26608111 3 1 1020096 hda1 1022946 359030 11055850 13886300 18524433 42441169 487840520 19795761 0 15952587 33844701 3 2 33792727 hda2 16867355 41487758 466839562 3680943 49279950 136588121 1489574368 3787 0 7851978 6830330 3 3 20482875 hda3 21222823 25575094 374388250 40543161 47823459 36322836 675746654 7388934 0 15848238 6467323 3 4 1 hda4 0 0 0 0 0 0 0 0 0 0 0 3 5 19968763 hda5 12553421 16972432 236204218 36866208 132089301 133660978 2128837078 24946643 0 4783292 19031628 3 6 3068383 hda6 2219407 1764003 31864346 28167600 15941594 29654857 365639372 10253628 0 18729247 38579828 3 7 1020096 hda7 39200 370913 3278858 960450 939530 2371952 26950130 41566540 0 10284610 42535080 3 8 522081 hda8 111525236 31538692 1144516184 15779080 54466614 90679266 1166340568 28867099 0 7075683 3835907

dstat-0.7.4/proc/partitions-2.4.24

major minor #blocks name rio rmerge rsect ruse wio wmerge wsect wuse running use aveq 22 0 4224150 ide/host0/bus1/target0/lun0/disc 24958 39400 473150 343900 53587 23309 500312 5447730 0 380490 5795870 22 1 1023876 ide/host0/bus1/target0/lun0/part1 3592 4076 61498 43970 27109 5490 261392 546010 0 110060 590040 22 2 130882 ide/host0/bus1/target0/lun0/part2 1 0 8 10 0 0 0 0 0 10 10 22 3 2047815 ide/host0/bus1/target0/lun0/part3 16859 32873 397730 253060 21988 2699 199512 3131240 0 277340 3388480 22 4 1021545 ide/host0/bus1/target0/lun0/part4 4505 2448 13906 46850 4490 15120 39408 1770480 0 73390 1817330 3 0 8257032 ide/host0/bus0/target0/lun0/disc 1 3 8 10 0 0 0 0 0 10 10 3 1 2048256 ide/host0/bus0/target0/lun0/part1 0 0 0 0 0 0 0 0 0 0 0 3 2 3068415 ide/host0/bus0/target0/lun0/part2 0 0 0 0 0 0 0 0 0 0 0 3 3 1534207 ide/host0/bus0/target0/lun0/part3 0 0 0 0 0 0 0 0 0 0 0 3 4 1598467 ide/host0/bus0/target0/lun0/part4 0 0 0 0 0 0 0 0 0 0 0

dstat-0.7.4/proc/partitions-2.6.11

major minor #blocks name 3 0 78150744 hda 3 1 4611568 hda1 3 2 3591000 hda2 3 3 1 hda3 3 5 11801128 hda5 3 6 536728 hda6 3 7 6168928 hda7 3 8 6191608 hda8 3 9 45249592 hda9 253 0 4587520 dm-0 7 0 652862 loop0 7 1 653886 loop1 7 2 506174 loop2 7 3 631824 loop3 7 4 652852 loop4 7 5 651854 loop5 7 6 395278 loop6 7 7 652478 loop7 7 8 653444 loop8 7 9 652756 loop9 7 10 619972 loop10 7 11 137586 loop11 7 12 666538 loop12 7 13 659278 loop13 7 14 152052 loop14 7 15 157376 loop15 7 16 636162 loop16 7 17 649870 loop17 7 18 201976 loop18 7 19 102298 loop19 7 20 596550 loop20 7 21 250010 loop21 7 22 4332 loop22 7 23 47104 loop23 7 24 145808 loop24 7 25 640022 loop25 7 26 653232 loop26 7 27 208950 loop27 7 28 164716 loop28 7 29 641798 loop29 7 30 645894 loop30 7 31 614270 loop31 7 32 177882 loop32 7 33 594366 loop33 7 34 649578 loop34 7 35 641730 loop35 7 36 242246 loop36 7 37 205478 loop37 7 38 631000 loop38 7 39 653880 loop39 7 40 653894 loop40 7 41 183950 loop41 7 42 63614 loop42 7 43 157376 loop43

dstat-0.7.4/proc/stat-2.4.21

cpu 83497797 33759 24080131 1872241329 58643672 2895401 1573921
cpu0 83497797 33759 24080131 1872241329 58643672 2895401 1573921
page 1138148875 1022982406
swap 143064392 145792094
intr 696424900 2042966010 2 0 0 0 0 0 0 1 0 0 2463880312 0 0 484527294 18577
disk_io: (3,0):(486355229,165458063,2268642596,320897166,2045955946)
ctxt 3597668172
btime 1096375576
processes 31542089
procs_running 1
procs_blocked 1

dstat-0.7.4/proc/stat-2.4.24

cpu 421667 648 73528 27329820
cpu0 421667 648 73528 27329820
page 582900 3939237
swap 46 142
intr 29160226 27825663 2 0 0 0 0 2 0 0 0 897241 0 0 0 437314 4
disk_io: (3,0):(440938,35718,1165792,405220,7878474)
ctxt 2787728
btime 1116449546
processes 49298

dstat-0.7.4/proc/stat-2.6.11

cpu 2185800 2819 824286 21360661 305445 13332 0 0
cpu0 2185800 2819 824286 21360661 305445 13332 0 0
intr 274063051 246960995 390491 0 4 4 21162475 0 2 7 2135609 1 677108 1623104 0 985826 127425
ctxt 327578142
btime 1116552999
processes 76650
procs_running 1
procs_blocked 0
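The stat-* fixtures above are /proc/stat snapshots; dstat derives its cpu usr/sys/idl percentages from the difference between two such snapshots. A hedged sketch of that arithmetic (the helper name `cpu_percentages` and the second sample line are made up for illustration; the first sample line is taken from the stat-2.6.11 fixture):

```python
# Sketch: cpu percentages from two /proc/stat 'cpu' lines, the way dstat
# computes them. Fields after 'cpu' are cumulative jiffies:
# usr nice sys idle iowait irq softirq.

def cpu_percentages(line1, line2):
    c1 = [int(x) for x in line1.split()[1:8]]
    c2 = [int(x) for x in line2.split()[1:8]]
    total = sum(c2) - sum(c1)  # total jiffies elapsed across all states

    def pct(i):
        return 100.0 * (c2[i] - c1[i]) / total

    return pct(0), pct(2), pct(3)  # usr, sys, idl

first = 'cpu 2185800 2819 824286 21360661 305445 13332 0 0'   # from the fixture
second = 'cpu 2185900 2819 824306 21360741 305445 13332 0 0'  # hypothetical later snapshot
print(cpu_percentages(first, second))  # (50.0, 10.0, 40.0)
```

Because only differences enter the formula, the absolute counter values (and the boot time they accumulate from) cancel out; only the jiffies spent in each state between the two samples matter.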