lttnganalyses-0.6.1/lttng-irqfreq

#!/usr/bin/env python3
#
# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from lttnganalyses.cli import irq

if __name__ == '__main__':
    irq.runfreq()

lttnganalyses-0.6.1/lttng-periodtop

#!/usr/bin/env python3
#
# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from lttnganalyses.cli import periods

if __name__ == '__main__':
    periods.runtop()

lttnganalyses-0.6.1/README.rst

LTTng analyses
**************

.. image:: https://img.shields.io/pypi/v/lttnganalyses.svg?label=Latest%20version
   :target: https://pypi.python.org/pypi/lttnganalyses
   :alt: Latest version released on PyPi

.. image:: https://travis-ci.org/lttng/lttng-analyses.svg?branch=master&label=Travis%20CI%20build
   :target: https://travis-ci.org/lttng/lttng-analyses
   :alt: Status of Travis CI
.. image:: https://img.shields.io/jenkins/s/https/ci.lttng.org/lttng-analyses_master_build.svg?label=LTTng%20CI%20build
   :target: https://ci.lttng.org/job/lttng-analyses_master_build
   :alt: Status of LTTng CI

The **LTTng analyses** are a set of various executable analyses to
extract and visualize monitoring data and metrics from `LTTng `_ kernel
traces on the command line.

As opposed to other "live" diagnostic or monitoring solutions, this
approach is based on the following workflow:

#. Record your system's activity with LTTng, a low-overhead tracer.
#. Do whatever it takes for your problem to occur.
#. Diagnose your problem's cause **offline** (when tracing is stopped).

This solution allows you to target problems that are hard to find and
to "dig" until the root cause is found.

**Current limitations**:

- The LTTng analyses can be quite slow to execute. There are a number
  of places where they could be optimized, but using the Python
  interpreter seems to be an important impediment.

  This project is regarded by its authors as a testing ground to
  experiment with analysis features, user interfaces, and usability in
  general. It is not considered ready to analyze long traces.

**Contents**:

.. contents::
   :local:
   :depth: 3
   :backlinks: none

Install LTTng analyses
======================

.. NOTE::
   Version 2.0 of `Trace Compass `_ requires LTTng analyses 0.4:
   Trace Compass 2.0 is not compatible with LTTng analyses 0.5 and
   later. In this case, we suggest that you install LTTng analyses
   from the ``stable-0.4`` branch of the project's Git repository (see
   `Install from the Git repository`_). You can also `download `_ the
   latest 0.4 release tarball and follow the
   `Install from a release tarball`_ procedure.

Required dependencies
---------------------

- `Python `_ ≥ 3.4
- `setuptools `_
- `pyparsing `_ ≥ 2.0.0
- `Babeltrace `_ ≥ 1.2 with Python bindings
  (``--enable-python-bindings`` when building from source)

Optional dependencies
---------------------

- `LTTng `_ ≥ 2.5: to use the ``lttng-analyses-record`` script and to
  trace the system in general
- `termcolor `_: color support
- `progressbar `_: terminal progress bar support (this is not required
  for the machine interface's progress indication feature)

Install from PyPI (online repository)
-------------------------------------

To install the latest LTTng analyses release on your system from
`PyPI `_:

#. Install the required dependencies.
#. **Optional**: Install the optional dependencies.
#. Make sure ``pip`` for Python 3 is installed on your system. The
   package is named ``python3-pip`` on most distributions
   (``python-pip`` on Arch Linux).
#. Use ``pip3`` to install LTTng analyses:

   .. code-block:: bash

      sudo pip3 install --upgrade lttnganalyses

Note that you can also install LTTng analyses locally, only for your
user:

.. code-block:: bash

   pip3 install --user --upgrade lttnganalyses

Files are installed in ``~/.local``, therefore ``~/.local/bin`` must be
part of your ``PATH`` environment variable for the LTTng analyses to be
launchable.

Install from a release tarball
------------------------------

To install a specific LTTng analyses release (tarball) on your system:

#. Install the required dependencies.
#. **Optional**: Install the optional dependencies.
#. `Download `_ and extract the desired release tarball.
#. Use ``setup.py`` to install LTTng analyses:

   .. code-block:: bash

      sudo ./setup.py install
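After installing through any of the methods in this section, you can do
a quick sanity check that the executables are available
(``lttng-cputop`` is only an example here; any installed analysis
works):

.. code-block:: bash

   lttng-cputop --help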
Install from the Git repository
-------------------------------

To install LTTng analyses from a specific branch or tag of the
project's Git repository:

#. Install the required dependencies.
#. **Optional**: Install the optional dependencies.
#. Make sure ``pip`` for Python 3 is installed on your system. The
   package is named ``python3-pip`` on most distributions
   (``python-pip`` on Arch Linux).
#. Use ``pip3`` to install LTTng analyses:

   .. code-block:: bash

      sudo pip3 install --upgrade git+git://github.com/lttng/lttng-analyses.git@master

   Replace ``master`` with the desired branch or tag name to install
   in the previous URL.

Note that you can also install LTTng analyses locally, only for your
user:

.. code-block:: bash

   pip3 install --user --upgrade git+git://github.com/lttng/lttng-analyses.git@master

Files are installed in ``~/.local``, therefore ``~/.local/bin`` must be
part of your ``PATH`` environment variable for the LTTng analyses to be
launchable.

Install on Ubuntu
-----------------

To install LTTng analyses on Ubuntu ≥ 12.04:

#. Add the *LTTng Latest Stable* PPA repository:

   .. code-block:: bash

      sudo apt-get install -y software-properties-common
      sudo apt-add-repository -y ppa:lttng/ppa
      sudo apt-get update

   Replace ``software-properties-common`` with
   ``python-software-properties`` on Ubuntu 12.04.

#. Install the required dependencies:

   .. code-block:: bash

      sudo apt-get install -y babeltrace
      sudo apt-get install -y python3-babeltrace
      sudo apt-get install -y python3-setuptools

   On Ubuntu > 12.04:

   .. code-block:: bash

      sudo apt-get install -y python3-pyparsing

   On Ubuntu 12.04:

   .. code-block:: bash

      sudo pip3 install --upgrade pyparsing

#. **Optional**: Install the optional dependencies:

   .. code-block:: bash

      sudo apt-get install -y lttng-tools
      sudo apt-get install -y lttng-modules-dkms
      sudo apt-get install -y python3-progressbar
      sudo apt-get install -y python3-termcolor

#. Install LTTng analyses:

   .. code-block:: bash

      sudo apt-get install -y python3-lttnganalyses

Install on Debian "sid"
-----------------------

To install LTTng analyses on Debian "sid":

#. Install the required dependencies:

   .. code-block:: bash

      sudo apt-get install -y babeltrace
      sudo apt-get install -y python3-babeltrace
      sudo apt-get install -y python3-setuptools
      sudo apt-get install -y python3-pyparsing

#. **Optional**: Install the optional dependencies:

   .. code-block:: bash

      sudo apt-get install -y lttng-tools
      sudo apt-get install -y lttng-modules-dkms
      sudo apt-get install -y python3-progressbar
      sudo apt-get install -y python3-termcolor

#. Install LTTng analyses:

   .. code-block:: bash

      sudo apt-get install -y python3-lttnganalyses

Record a trace
==============

This section is a quick reminder of how to record an LTTng kernel
trace. See LTTng's `quick start guide `_ to familiarize yourself with
LTTng.

Automatic
---------

LTTng analyses ships with a handy script, ``lttng-analyses-record``,
which automates the steps to record a kernel trace with the events
required by the analyses.

To use ``lttng-analyses-record``:

#. Launch the installed script:

   .. code-block:: bash

      lttng-analyses-record

#. Do whatever it takes for your problem to occur.
#. When you are done recording, press Ctrl+C where the script is
   running.

Manual
------

To record an LTTng kernel trace suitable for the LTTng analyses:

#. Create a tracing session:

   .. code-block:: bash

      sudo lttng create

#. Create a channel with a large sub-buffer size:

   .. code-block:: bash

      sudo lttng enable-channel --kernel chan --subbuf-size=8M

#. Create event rules to capture the needed events:

   ..
code-block:: bash sudo lttng enable-event --kernel --channel=chan block_bio_backmerge sudo lttng enable-event --kernel --channel=chan block_bio_remap sudo lttng enable-event --kernel --channel=chan block_rq_complete sudo lttng enable-event --kernel --channel=chan block_rq_issue sudo lttng enable-event --kernel --channel=chan irq_handler_entry sudo lttng enable-event --kernel --channel=chan irq_handler_exit sudo lttng enable-event --kernel --channel=chan irq_softirq_entry sudo lttng enable-event --kernel --channel=chan irq_softirq_exit sudo lttng enable-event --kernel --channel=chan irq_softirq_raise sudo lttng enable-event --kernel --channel=chan kmem_mm_page_alloc sudo lttng enable-event --kernel --channel=chan kmem_mm_page_free sudo lttng enable-event --kernel --channel=chan lttng_statedump_block_device sudo lttng enable-event --kernel --channel=chan lttng_statedump_file_descriptor sudo lttng enable-event --kernel --channel=chan lttng_statedump_process_state sudo lttng enable-event --kernel --channel=chan mm_page_alloc sudo lttng enable-event --kernel --channel=chan mm_page_free sudo lttng enable-event --kernel --channel=chan net_dev_xmit sudo lttng enable-event --kernel --channel=chan netif_receive_skb sudo lttng enable-event --kernel --channel=chan sched_pi_setprio sudo lttng enable-event --kernel --channel=chan sched_process_exec sudo lttng enable-event --kernel --channel=chan sched_process_fork sudo lttng enable-event --kernel --channel=chan sched_switch sudo lttng enable-event --kernel --channel=chan sched_wakeup sudo lttng enable-event --kernel --channel=chan sched_waking sudo lttng enable-event --kernel --channel=chan softirq_entry sudo lttng enable-event --kernel --channel=chan softirq_exit sudo lttng enable-event --kernel --channel=chan softirq_raise sudo lttng enable-event --kernel --channel=chan --syscall --all #. Start recording: .. code-block:: bash sudo lttng start #. Do whatever it takes for your problem to occur. #. Stop recording and destroy the tracing session to free its resources: .. code-block:: bash sudo lttng stop sudo lttng destroy See the `LTTng Documentation `_ for other use cases, like sending the trace data over the network instead of recording trace files on the target's file system. Run an LTTng analysis ===================== The **LTTng analyses** are a set of various command-line analyses. Each analysis accepts the path to a recorded trace (see `Record a trace`_) as its argument, as well as various command-line options to control the analysis and its output. Many command-line options are common to all the analyses, so that you can filter by timerange, process name, process ID, minimum and maximum values, and the rest. Also note that the reported timestamps can optionally be expressed in the GMT time zone. Each analysis is installed as an executable starting with the ``lttng-`` prefix. .. list-table:: Available LTTng analyses :header-rows: 1 * - Command - Description * - ``lttng-cputop`` - Per-TID, per-CPU, and total top CPU usage. * - ``lttng-iolatencyfreq`` - I/O request latency distribution. * - ``lttng-iolatencystats`` - Partition and system call latency statistics. * - ``lttng-iolatencytop`` - Top system call latencies. * - ``lttng-iolog`` - I/O operations log. * - ``lttng-iousagetop`` - I/O usage top. * - ``lttng-irqfreq`` - Interrupt handler duration frequency distribution. * - ``lttng-irqlog`` - Interrupt log. * - ``lttng-irqstats`` - Hardware and software interrupt statistics. * - ``lttng-memtop`` - Per-TID top allocated/freed memory. 
* - ``lttng-schedfreq``
     - Scheduling latency frequency distribution.
   * - ``lttng-schedlog``
     - Scheduling log.
   * - ``lttng-schedstats``
     - Scheduling latency stats.
   * - ``lttng-schedtop``
     - Scheduling top.
   * - ``lttng-periodlog``
     - Period log.
   * - ``lttng-periodstats``
     - Period duration stats.
   * - ``lttng-periodtop``
     - Period duration top.
   * - ``lttng-periodfreq``
     - Period duration frequency distribution.
   * - ``lttng-syscallstats``
     - Per-TID and global system call statistics.

Use the ``--help`` option of any command to list the descriptions of
the possible command-line options.

.. NOTE::
   You can set the ``LTTNG_ANALYSES_DEBUG`` environment variable to
   ``1`` when you launch an analysis to enable debug output. You can
   also use the general ``--debug`` option.

Filtering options
-----------------

Depending on the analysis, filter options are available. The complete
list of filter options is:

.. list-table:: Available filtering command-line options
   :header-rows: 1

   * - Command-line option
     - Description
   * - ``--begin``
     - Trace time at which to begin the analysis. Format:
       ``HH:MM:SS[.NNNNNNNNN]``.
   * - ``--cpu``
     - Comma-delimited list of CPU IDs for which to display the
       results.
   * - ``--end``
     - Trace time at which to end the analysis. Format:
       ``HH:MM:SS[.NNNNNNNNN]``.
   * - ``--irq``
     - List of hardware IRQ numbers for which to display the results.
   * - ``--limit``
     - Maximum number of output rows per table. This option is useful
       for "top" analyses, like ``lttng-cputop``.
   * - ``--min``
     - Minimum duration (µs) to keep in results.
   * - ``--minsize``
     - Minimum I/O operation size (B) to keep in results.
   * - ``--max``
     - Maximum duration (µs) to keep in results.
   * - ``--maxsize``
     - Maximum I/O operation size (B) to keep in results.
   * - ``--procname``
     - Comma-delimited list of process names for which to display the
       results.
   * - ``--softirq``
     - List of software IRQ numbers for which to display the results.
   * - ``--tid``
     - Comma-delimited list of thread IDs for which to display the
       results.

Period options
--------------

LTTng analyses feature a powerful "period engine". A *period* is an
interval which begins and ends under specific conditions. When the
analysis results are displayed, they are isolated for the periods that
were opened and closed during the analysis.

A period can have a parent. If that is the case, its parent must exist
for the period to begin at all. This tree structure of periods is
useful to keep a form of custom user state during the generic kernel
analysis.

.. ATTENTION::
   The ``--period`` and ``--period-captures`` options' arguments
   include characters that are considered special by most shells, like
   ``$``, ``*``, and ``&``. Make sure to always **single-quote** those
   arguments when running the LTTng analyses on the command line.

Period definition
~~~~~~~~~~~~~~~~~

You can define one or more periods on the command line, when launching
an analysis, with the ``--period`` option. This option's argument
accepts the following form (content within square brackets is
optional)::

    [ NAME [ (PARENT) ] ] : BEGINEXPR [ : ENDEXPR ]

``NAME``
    Optional name of the period definition. All periods opened from
    this definition have this name. The syntax of this name is the
    same as a C identifier's.

``PARENT``
    Optional name of a *previously defined* period which acts as the
    parent period definition of this definition. ``NAME`` must be set
    for ``PARENT`` to be set.

``BEGINEXPR``
    Matching expression which a given event must match in order for an
    actual period to be instantiated by this definition.
``ENDEXPR`` Matching expression which a given event must match in order for an instance of this definition to be closed. If this part is omitted, ``BEGINEXPR`` is used for the ending expression too. Matching expression ................... A matching expression is a C-like logical expression. It supports nesting expressions with ``(`` and ``)``, as well as the ``&&`` (logical *AND*), ``||`` (logical *OR*), and ``!`` (logical *NOT*) operators. The precedence of those operators is the same as in the C language. The atomic operands in those logical expressions are comparisons. For the following comparison syntaxes, consider that: - ``EVT`` indicates an event source. The available event sources are: ``$evt`` Current event. ``$begin.$evt`` In ``BEGINEXPR``: current event (same as ``$evt``). In ``ENDEXPR``: event which, for this period instance, was matched when ``BEGINEXPR`` was evaluated. ``$parent.$begin.$evt`` Event which, for the parent period instance of this period instance, was matched when ``BEGINEXPR`` of the parent was evaluated. - ``FIELD`` indicates an event field source. The available event field sources are: ``NAME`` (direct field name) Automatic scope: try to find the field named ``NAME`` in the dynamic scopes in this order: #. Event payload #. Event context #. Event header #. Stream event context #. Packet context #. Packet header ``$payload.NAME`` Event payload field named ``NAME``. ``$ctx.NAME`` Event context field named ``NAME``. ``$header.NAME`` Event header field named ``NAME``. ``$stream_ctx.NAME`` Stream event context field named ``NAME``. ``$pkt_ctx.NAME`` Packet context field named ``NAME``. ``$pkt_header.NAME`` Packet header field named ``NAME``. - ``VALUE`` indicates one of: - A constant, decimal number. This can be an integer or a real number, positive or negative, and supports the ``e`` scientific notation. Examples: ``23``, ``-18.28``, ``7.2e9``. - A double-quoted literal string. ``"`` and ``\`` can be escaped with ``\``. Examples: ``"hello, world!"``, ``"here's another \"quoted\" string"``. - An event field, that is, ``EVT.FIELD``, considering the replacements described above. - ``NUMVALUE`` indicates one of: - A constant, decimal number. This can be an integer or a real number, positive or negative, and supports the ``e`` scientific notation. Examples: ``23``, ``-18.28``, ``7.2e9``. - An event field, that is, ``EVT.FIELD``, considering the replacements described above. .. list-table:: Available comparison syntaxes for matching expressions :header-rows: 1 * - Comparison syntax - Description * - #. ``EVT.$name == "NAME"`` #. ``EVT.$name != "NAME"`` #. ``EVT.$name =* "PATTERN"`` - Name matching: #. Name of event source ``EVT`` is equal to ``NAME``. #. Name of event source ``EVT`` is not equal to ``NAME``. #. Name of event source ``EVT`` satisfies the globbing pattern ``PATTERN`` (see `fnmatch `_). * - #. ``EVT.FIELD == VALUE`` #. ``EVT.FIELD != VALUE`` #. ``EVT.FIELD < NUMVALUE`` #. ``EVT.FIELD <= NUMVALUE`` #. ``EVT.FIELD > NUMVALUE`` #. ``EVT.FIELD >= NUMVALUE`` #. ``EVT.FIELD =* "PATTERN"`` - Value matching: #. The value of the field ``EVT.FIELD`` is equal to the value ``VALUE``. #. The value of the field ``EVT.FIELD`` is not equal to the value ``VALUE``. #. The value of the field ``EVT.FIELD`` is lesser than the value ``NUMVALUE``. #. The value of the field ``EVT.FIELD`` is lesser than or equal to the value ``NUMVALUE``. #. The value of the field ``EVT.FIELD`` is greater than the value ``NUMVALUE``. #. 
The value of the field ``EVT.FIELD`` is greater than or equal to the value ``NUMVALUE``. #. The value of the field ``EVT.FIELD`` satisfies the globbing pattern ``PATTERN`` (see `fnmatch `_). In any case, if ``EVT.FIELD`` does not target an existing field, the comparison including it fails. Also, string fields cannot be compared to number values (constant or fields). Examples ........ - Create a period instance named ``switch`` when: - The current event name is ``sched_switch``. End this period instance when: - The current event name is ``sched_switch``. Period definition:: switch : $evt.$name == "sched_switch" - Create a period instance named ``switch`` when: - The current event name is ``sched_switch`` *AND* - The current event's ``next_tid`` field is *NOT* equal to 0. End this period instance when: - The current event name is ``sched_switch`` *AND* - The current event's ``prev_tid`` field is equal to the ``next_tid`` field of the matched event in the begin expression *AND* - The current event's ``cpu_id`` field is equal to the ``cpu_id`` field of the matched event in the begin expression. Period definition:: switch : $evt.$name == "sched_switch" && $evt.next_tid != 0 : $evt.$name == "sched_switch" && $evt.prev_tid == $begin.$evt.next_tid && $evt.cpu_id == $begin.$evt.cpu_id - Create a period instance named ``irq`` when: - A parent period instance named ``switch`` is currently opened. - The current event name satisfies the ``irq_*_entry`` globbing pattern *AND* - The current event's ``cpu_id`` field is equal to the ``cpu_id`` field of the matched event in the begin expression of the parent period instance. End this period instance when: - The current event name is ``irq_handler_exit`` *AND* - The current event's ``cpu_id`` field is equal to the ``cpu_id`` field of the matched event in the begin expression. Period definition:: irq(switch) : $evt.$name =* "irq_*_entry" && $evt.cpu_id == $parent.$begin.$evt.cpu_id : $evt.$name == "irq_handler_exit" && $evt.cpu_id == $begin.$evt.cpu_id - Create a period instance named ``hello`` when: - The current event name satisfies the ``hello*`` globbing pattern, but excludes ``hello world``. End this period instance when: - The current event name is the same as the name of the matched event in the begin expression *AND* - The current event's ``theid`` header field is lesser than or equal to 231. Period definition:: hello : $evt.$name =* "hello*" && $evt.$name != "hello world" : $evt.$name == $begin.$evt.$name && $evt.$header.theid <= 231 Period captures ~~~~~~~~~~~~~~~ When a period instance begins or ends, the analysis can capture the current values of specific event fields and display them in its results. You can set period captures with the ``--period-captures`` command-line option. This option's argument accepts the following form (content within square brackets is optional):: NAME : BEGINCAPTURES [ : ENDCAPTURES ] ``NAME`` Name of period instances on which to apply those captures. A ``--period`` option in the same command line must define this name. ``BEGINCAPTURES`` Comma-delimited list of event fields to capture when the beginning expression of the period definition named ``NAME`` is matched. ``ENDCAPTURES`` Comma-delimited list of event fields to capture when the ending expression of the period definition named ``NAME`` is matched. If this part is omitted, there are no end captures. 
The format of ``BEGINCAPTURES`` and ``ENDCAPTURES`` is a
comma-delimited list of tokens having this format::

    [ CAPTURENAME = ] EVT.FIELD

or::

    [ CAPTURENAME = ] EVT.$name

``CAPTURENAME``
    Custom name for this capture. The syntax of this name is the same
    as a C identifier's. If this part is omitted, the literal
    expression used for ``EVT.FIELD`` is used.

``EVT`` and ``FIELD``
    See `Matching expression`_.

Period select and aggregate parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

With ``lttng-periodlog``, it is possible to see the list of periods in
the context of their parent. When the ``--aggregate-by`` option is
specified, each line of the log shows the time range of the period
selected with the ``--select`` argument next to the time range of the
parent period that contains it.

In ``lttng-periodstats`` and ``lttng-periodfreq``, these two flags act
as filters to limit the output to only the relevant periods. If they
are omitted, all existing combinations of parent/child statistics and
frequency distributions are output.

Grouping
~~~~~~~~

When fields are captured during the period analyses, it is possible to
compute the statistics and frequency distribution grouped by the
values of these fields, instead of globally for the trace.

The format is::

    --group-by "PERIODNAME.CAPTURENAME[, PERIODNAME.CAPTURENAME]"

If multiple values are passed, the analysis outputs one list of tables
(statistics and/or frequency distribution) for each unique combination
of the fields' values. For example, if we track the ``open`` system
call and we are interested in the average duration of this call by
filename, we only have to capture the filename field and group the
results by ``open.filename``.

Examples
........

Begin captures only::

    switch : $evt.next_tid, name = $evt.$name, msg_id = $parent.$begin.$evt.id

Begin and end captures::

    hello : beginning = $evt.$ctx.begin_ts, $evt.received_bytes :
            $evt.send_bytes, $evt.$name, begin = $begin.$evt.$ctx.begin_ts
            end = $evt.$ctx.end_ts

Top scheduling latency (delay between ``sched_waking(tid=$TID)`` and
``sched_switch(next_tid=$TID)``), recording the procname of the waker
(dependent on the ``procname`` context being present in the trace), the
priority, and the target CPU:

.. code-block:: bash

   lttng-periodtop /path/to/trace \
       --period 'wake : $evt.$name == "sched_waking" : $evt.$name == "sched_switch" && $evt.next_tid == $begin.$evt.$payload.tid' \
       --period-capture 'wake : waker = $evt.procname, prio = $evt.prio : wakee = $evt.next_comm, cpu = $evt.cpu_id'

::

    Timerange: [2016-07-21 17:07:47.832234248, 2016-07-21 17:07:48.948152659]

    Period top
    Begin                End                   Duration (us)  Name  Begin capture            End capture
    [17:07:47.835338581, 17:07:47.946834976]     111496.395   wake  waker = lttng-consumerd  wakee = kworker/0:2
                                                                    prio = 20                cpu = 0
    [17:07:47.850409057, 17:07:47.946829256]      96420.199   wake  waker = swapper/2        wakee = migration/0
                                                                    prio = -100              cpu = 0
    [17:07:48.300313282, 17:07:48.300993892]        680.610   wake  waker = Xorg             wakee = ibus-ui-gtk3
                                                                    prio = 20                cpu = 3
    [17:07:48.300330060, 17:07:48.300920648]        590.588   wake  waker = Xorg             wakee = ibus-x11
                                                                    prio = 20                cpu = 3

Log of all the IRQs handled while a user-space process was running,
capturing the procname of the interrupted process as well as the name
and number of the IRQ:

..
code-block:: bash lttng-periodlog /path/to/trace \ --period 'switch : $evt.$name == "sched_switch" && $evt.next_tid != 0 : $evt.$name == "sched_switch" && $evt.prev_tid == $begin.$evt.next_tid && $evt.cpu_id == $begin.$evt.cpu_id' \ --period 'irq(switch) : $evt.$name == "irq_handler_entry" && $evt.cpu_id == $parent.$begin.$evt.cpu_id : $evt.$name == "irq_handler_exit" && $evt.cpu_id == $begin.$evt.cpu_id' \ --period-capture 'irq : name = $evt.name, irq = $evt.irq, current = $parent.$begin.$evt.next_comm' :: Period log Begin End Duration (us) Name Begin capture End capture [10:58:26.169238875, 10:58:26.169244920] 6.045 switch [10:58:26.169598385, 10:58:26.169602967] 4.582 irq name = ahci irq = 41 current = lttng-consumerd [10:58:26.169811553, 10:58:26.169816218] 4.665 irq name = ahci irq = 41 current = lttng-consumerd [10:58:26.170025600, 10:58:26.170030197] 4.597 irq name = ahci irq = 41 current = lttng-consumerd [10:58:26.169236842, 10:58:26.170105711] 868.869 switch Log of all the ``open`` system call periods aggregated by the ``sched_switch`` in which they occurred: .. code-block:: bash lttng-periodlog /path/to/trace \ --period 'switch : $evt.$name == "sched_switch" : $evt.$name == "sched_switch" && $begin.$evt.next_tid == $evt.prev_tid && $begin.$evt.cpu_id == $evt.cpu_id' \ --period 'open(switch) : $evt.$name == "syscall_entry_open" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "syscall_exit_open" && $begin.$evt.cpu_id == $evt.cpu_id' \ --period-captures 'switch : comm = $evt.next_comm, cpu = $evt.cpu_id, tid = $evt.next_tid' \ --period-captures 'open : filename = $evt.filename : fd = $evt.ret' \ --select open --aggregate-by switch :: Aggregated log Aggregation of (open) by switch Parent | | Durations (us) | Begin End Duration (us) Name | Child name Count | Min Avg Max Stdev Runtime | Parent captures [10:58:26.222823677, 10:58:26.224039381] 1215.704 switch | switch/open 3 | 7.517 9.548 11.248 1.887 28.644 | switch.comm = bash, switch.cpu = 3, switch.tid = 12420 [10:58:26.856224058, 10:58:26.856589867] 365.809 switch | switch/open 1 | 77.620 77.620 77.620 ? 77.620 | switch.comm = ntpd, switch.cpu = 0, switch.tid = 11132 [10:58:27.000068031, 10:58:27.000954859] 886.828 switch | switch/open 15 | 9.224 16.126 37.190 6.681 241.894 | switch.comm = irqbalance, switch.cpu = 0, switch.tid = 1656 [10:58:27.225474282, 10:58:27.229160014] 3685.732 switch | switch/open 22 | 5.797 6.767 9.308 0.972 148.881 | switch.comm = bash, switch.cpu = 1, switch.tid = 12421 Statistics about the memory allocation performed within an ``open`` system call within a single ``sched_switch`` (no blocking or preemption): .. 
code-block:: bash lttng-periodstats /path/to/trace \ --period 'switch : $evt.$name == "sched_switch" : $evt.$name == "sched_switch" && $begin.$evt.next_tid == $evt.prev_tid && $begin.$evt.cpu_id == $evt.cpu_id' \ --period 'open(switch) : $evt.$name == "syscall_entry_open" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "syscall_exit_open" && $begin.$evt.cpu_id == $evt.cpu_id' \ --period 'alloc(open) : $evt.$name == "kmem_cache_alloc" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "kmem_cache_free" && $evt.ptr == $begin.$evt.ptr' \ --period-captures 'switch : comm = $evt.next_comm, cpu = $evt.cpu_id, tid = $evt.next_tid' \ --period-captures 'open : filename = $evt.filename : fd = $evt.ret' \ --period-captures 'alloc : ptr = $evt.ptr' :: Timerange: [2015-01-06 10:58:26.140545481, 2015-01-06 10:58:27.229358936] Period tree: switch |-- open |-- alloc Period statistics (us) Period Count Min Avg Max Stdev Runtime switch 831 2.824 5233.363 172056.802 16197.531 4348924.614 switch/open 41 5.797 12.123 77.620 12.076 497.039 switch/open/alloc 44 1.152 10.277 74.476 11.582 452.175 Per-parent period duration statistics (us) With active children Period Parent Min Avg Max Stdev switch/open switch 28.644 124.260 241.894 92.667 switch/open/alloc switch 24.036 113.044 229.713 87.827 switch/open/alloc switch/open 4.550 11.029 74.476 11.768 Per-parent duration ratio (%) With active children Period Parent Min Avg Max Stdev switch/open switch 2 13.723 27 12.421 switch/open/alloc switch 1 12.901 25 12.041 switch/open/alloc switch/open 76 88.146 115 7.529 Per-parent period count statistics With active children Period Parent Min Avg Max Stdev switch/open switch 1 10.250 22 9.979 switch/open/alloc switch 1 11.000 22 10.551 switch/open/alloc switch/open 1 1.073 2 0.264 Per-parent period duration statistics (us) Globally Period Parent Min Avg Max Stdev switch/open switch 0.000 0.598 241.894 10.251 switch/open/alloc switch 0.000 0.544 229.713 9.443 switch/open/alloc switch/open 4.550 11.029 74.476 11.768 Per-parent duration ratio (%) Globally Period Parent Min Avg Max Stdev switch/open switch 0 0.066 27 1.209 switch/open/alloc switch 0 0.062 25 1.150 switch/open/alloc switch/open 76 88.146 115 7.529 Per-parent period count statistics Globally Period Parent Min Avg Max Stdev switch/open switch 0 0.049 22 0.929 switch/open/alloc switch 0 0.053 22 0.991 switch/open/alloc switch/open 1 1.073 2 0.264 These statistics can also be scoped by value of the FD returned by the ``open`` system, by appending ``--group-by "open.fd"`` to the previous command line. That way previous tables will be output for each value of FD returned, so it is possible to observe the behaviour based on the parameters of a system call. Using the ``lttng-periodfreq`` or the ``--freq`` parameter, these tables can also be presented as frequency distributions. Progress options ---------------- If the `progressbar `_ optional dependency is installed, a progress bar is available to indicate the progress of the analysis. By default, the progress bar is based on the current event's timestamp. Progress options are: .. list-table:: Available progress command-line options :header-rows: 1 * - Command-line option - Description * - ``--no-progress`` - Disable the progress bar. * - ``--progress-use-size`` - Use the approximate event size instead of the current event's timestamp to estimate the progress value. 
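For example, here is a hypothetical invocation (the trace path and the
filter values are placeholders) which combines the filtering and
progress options documented above:

.. code-block:: bash

   # per-TID top CPU usage between two trace times, for two process
   # names only, without a progress bar
   lttng-cputop /path/to/trace --no-progress \
       --begin 10:58:26 --end 10:58:27 \
       --procname bash,apache2 --limit 5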
Machine interface ----------------- If you want to display LTTng analyses results in a custom viewer, you can use the JSON-based LTTng analyses machine interface (LAMI). Each command in the previous table has its corresponding LAMI version with the ``-mi`` suffix. For example, the LAMI version of ``lttng-cputop`` is ``lttng-cputop-mi``. This version of LTTng analyses conforms to `LAMI 1.0 `_. Examples ======== This section shows a few examples of using some LTTng analyses. I/O --- Partition and system call latency statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolatencystats /path/to/trace :: Timerange: [2015-01-06 10:58:26.140545481, 2015-01-06 10:58:27.229358936] Syscalls latency statistics (usec): Type Count Min Average Max Stdev ----------------------------------------------------------------------------------------- Open 45 5.562 13.835 77.683 15.263 Read 109 0.316 5.774 62.569 9.277 Write 101 0.256 7.060 48.531 8.555 Sync 207 19.384 40.664 160.188 21.201 Disk latency statistics (usec): Name Count Min Average Max Stdev ----------------------------------------------------------------------------------------- dm-0 108 0.001 0.004 0.007 1.306 I/O request latency distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolatencyfreq /path/to/trace :: Timerange: [2015-01-06 10:58:26.140545481, 2015-01-06 10:58:27.229358936] Open latency distribution (usec) ############################################################################### 5.562 ███████████████████████████████████████████████████████████████████ 25 9.168 ██████████ 4 12.774 █████████████████████ 8 16.380 ████████ 3 19.986 █████ 2 23.592 0 27.198 0 30.804 0 34.410 ██ 1 38.016 0 41.623 0 45.229 0 48.835 0 52.441 0 56.047 0 59.653 0 63.259 0 66.865 0 70.471 0 74.077 █████ 2 Top system call latencies ~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolatencytop /path/to/trace --limit=3 --minsize=2 :: Checking the trace for lost events... 
Timerange: [2015-01-15 12:18:37.216484041, 2015-01-15 12:18:53.821580313] Top open syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:50.432950815,12:18:50.870648568] open 437697.753 N/A apache2 31517 /var/lib/php5/sess_0ifir2hangm8ggaljdphl9o5b5 (fd=13) [12:18:52.946080165,12:18:52.946132278] open 52.113 N/A apache2 31588 /var/lib/php5/sess_mr9045p1k55vin1h0vg7rhgd63 (fd=13) [12:18:46.800846035,12:18:46.800874916] open 28.881 N/A apache2 31591 /var/lib/php5/sess_r7c12pccfvjtas15g3j69u14h0 (fd=13) [12:18:51.389797604,12:18:51.389824426] open 26.822 N/A apache2 31520 /var/lib/php5/sess_4sdb1rtjkhb78sabnoj8gpbl00 (fd=13) Top read syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:37.256073107,12:18:37.256555967] read 482.860 7.00 B bash 10237 unknown (origin not found) (fd=3) [12:18:52.000209798,12:18:52.000252304] read 42.506 1.00 KB irqbalance 1337 /proc/interrupts (fd=3) [12:18:37.256559439,12:18:37.256601615] read 42.176 5.00 B bash 10237 unknown (origin not found) (fd=3) [12:18:42.000281918,12:18:42.000320016] read 38.098 1.00 KB irqbalance 1337 /proc/interrupts (fd=3) Top write syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:49.913241516,12:18:49.915908862] write 2667.346 95.00 B apache2 31584 /var/log/apache2/access.log (fd=8) [12:18:37.472823631,12:18:37.472859836] writev 36.205 21.97 KB apache2 31544 unknown (origin not found) (fd=12) [12:18:37.991578372,12:18:37.991612724] writev 34.352 21.97 KB apache2 31589 unknown (origin not found) (fd=12) [12:18:39.547778549,12:18:39.547812515] writev 33.966 21.97 KB apache2 31584 unknown (origin not found) (fd=12) Top sync syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:50.162776739,12:18:51.157522361] sync 994745.622 N/A sync 22791 None (fd=None) [12:18:37.227867532,12:18:37.232289687] sync_file_range 4422.155 N/A lttng-consumerd 19964 /home/julien/lttng-traces/analysis-20150115-120942/kernel/metadata (fd=32) [12:18:37.238076585,12:18:37.239012027] sync_file_range 935.442 N/A lttng-consumerd 19964 /home/julien/lttng-traces/analysis-20150115-120942/kernel/metadata (fd=32) [12:18:37.220974711,12:18:37.221647124] sync_file_range 672.413 N/A lttng-consumerd 19964 /home/julien/lttng-traces/analysis-20150115-120942/kernel/metadata (fd=32) I/O operations log ~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolog /path/to/trace :: [10:58:26.221618530,10:58:26.221620659] write 2.129 8.00 B /usr/bin/x-term 11793 anon_inode:[eventfd] (fd=5) [10:58:26.221623609,10:58:26.221628055] read 4.446 50.00 B /usr/bin/x-term 11793 /dev/ptmx (fd=24) [10:58:26.221638929,10:58:26.221640008] write 1.079 8.00 B /usr/bin/x-term 11793 anon_inode:[eventfd] (fd=5) [10:58:26.221676232,10:58:26.221677385] read 1.153 8.00 B /usr/bin/x-term 11793 anon_inode:[eventfd] (fd=5) [10:58:26.223401804,10:58:26.223411683] open 9.879 N/A sleep 12420 /etc/ld.so.cache (fd=3) [10:58:26.223448060,10:58:26.223455577] open 7.517 N/A sleep 12420 /lib/x86_64-linux-gnu/libc.so.6 (fd=3) [10:58:26.223456522,10:58:26.223458898] read 2.376 832.00 B sleep 12420 /lib/x86_64-linux-gnu/libc.so.6 (fd=3) [10:58:26.223918068,10:58:26.223929316] open 11.248 N/A sleep 12420 (fd=3) [10:58:26.231881565,10:58:26.231895970] writev 14.405 16.00 B /usr/bin/x-term 11793 socket:[45650] (fd=4) [10:58:26.231979636,10:58:26.231988446] recvmsg 8.810 16.00 B Xorg 1827 socket:[47480] (fd=38) I/O usage top ~~~~~~~~~~~~~ .. 
code-block:: bash lttng-iousagetop /path/to/trace :: Timerange: [2014-10-07 16:36:00.733214969, 2014-10-07 16:36:18.804584183] Per-process I/O Read ############################################################################### ██████████████████████████████████████████████████ 16.00 MB lttng-consumerd (2619) 0 B file 4.00 B net 16.00 MB unknown █████ 1.72 MB lttng-consumerd (2619) 0 B file 0 B net 1.72 MB unknown █ 398.13 KB postgres (4219) 121.05 KB file 277.07 KB net 8.00 B unknown 256.09 KB postgres (1348) 0 B file 255.97 KB net 117.00 B unknown 204.81 KB postgres (4218) 204.81 KB file 0 B net 0 B unknown 123.77 KB postgres (4220) 117.50 KB file 6.26 KB net 8.00 B unknown Per-process I/O Write ############################################################################### ██████████████████████████████████████████████████ 16.00 MB lttng-consumerd (2619) 0 B file 8.00 MB net 8.00 MB unknown ██████ 2.20 MB postgres (4219) 2.00 MB file 202.23 KB net 0 B unknown █████ 1.73 MB lttng-consumerd (2619) 0 B file 887.73 KB net 882.58 KB unknown ██ 726.33 KB postgres (1165) 8.00 KB file 6.33 KB net 712.00 KB unknown 158.69 KB postgres (1168) 158.69 KB file 0 B net 0 B unknown 80.66 KB postgres (1348) 0 B file 80.66 KB net 0 B unknown Files Read ############################################################################### ██████████████████████████████████████████████████ 8.00 MB anon_inode:[lttng_stream] (lttng-consumerd) 'fd 32 in lttng-consumerd (2619)' █████ 834.41 KB base/16384/pg_internal.init 'fd 7 in postgres (4219)', 'fd 7 in postgres (4220)', 'fd 7 in postgres (4221)', 'fd 7 in postgres (4222)', 'fd 7 in postgres (4223)', 'fd 7 in postgres (4224)', 'fd 7 in postgres (4225)', 'fd 7 in postgres (4226)' █ 256.09 KB socket:[8893] (postgres) 'fd 9 in postgres (1348)' █ 174.69 KB pg_stat_tmp/pgstat.stat 'fd 9 in postgres (4218)', 'fd 9 in postgres (1167)' 109.48 KB global/pg_internal.init 'fd 7 in postgres (4218)', 'fd 7 in postgres (4219)', 'fd 7 in postgres (4220)', 'fd 7 in postgres (4221)', 'fd 7 in postgres (4222)', 'fd 7 in postgres (4223)', 'fd 7 in postgres (4224)', 'fd 7 in postgres (4225)', 'fd 7 in postgres (4226)' 104.30 KB base/11951/pg_internal.init 'fd 7 in postgres (4218)' 12.85 KB socket (lttng-sessiond) 'fd 30 in lttng-sessiond (384)' 4.50 KB global/pg_filenode.map 'fd 7 in postgres (4218)', 'fd 7 in postgres (4219)', 'fd 7 in postgres (4220)', 'fd 7 in postgres (4221)', 'fd 7 in postgres (4222)', 'fd 7 in postgres (4223)', 'fd 7 in postgres (4224)', 'fd 7 in postgres (4225)', 'fd 7 in postgres (4226)' 4.16 KB socket (postgres) 'fd 9 in postgres (4226)' 4.00 KB /proc/interrupts 'fd 3 in irqbalance (1104)' Files Write ############################################################################### ██████████████████████████████████████████████████ 8.00 MB socket:[56371] (lttng-consumerd) 'fd 30 in lttng-consumerd (2619)' █████████████████████████████████████████████████ 8.00 MB pipe:[53306] (lttng-consumerd) 'fd 12 in lttng-consumerd (2619)' ██████████ 1.76 MB pg_xlog/00000001000000000000000B 'fd 31 in postgres (4219)' █████ 887.82 KB socket:[56369] (lttng-consumerd) 'fd 26 in lttng-consumerd (2619)' █████ 882.58 KB pipe:[53309] (lttng-consumerd) 'fd 18 in lttng-consumerd (2619)' 160.00 KB /var/lib/postgresql/9.1/main/base/16384/16602 'fd 14 in postgres (1165)' 158.69 KB pg_stat_tmp/pgstat.tmp 'fd 3 in postgres (1168)' 144.00 KB /var/lib/postgresql/9.1/main/base/16384/16613 'fd 12 in postgres (1165)' 88.00 KB /var/lib/postgresql/9.1/main/base/16384/16609 'fd 11 in postgres 
(1165)'
          78.28 KB socket:[8893] (postgres) 'fd 9 in postgres (1348)'

    Block I/O Read
    ###############################################################################

    Block I/O Write
    ###############################################################################
    ██████████████████████████████████████████████████    1.76 MB postgres (pid=4219)
    ████                                                 160.00 KB postgres (pid=1168)
    ██                                                   100.00 KB kworker/u8:0 (pid=1540)
    ██                                                    96.00 KB jbd2/vda1-8 (pid=257)
    █                                                     40.00 KB postgres (pid=1166)
                                                           8.00 KB kworker/u9:0 (pid=4197)
                                                           4.00 KB kworker/u9:2 (pid=1381)

    Disk nr_sector
    ###############################################################################
    ███████████████████████████████████████████████████████████████████  4416.00 sectors vda1

    Disk nr_requests
    ###############################################################################
    ████████████████████████████████████████████████████████████████████  177.00 requests vda1

    Disk request time/sector
    ###############################################################################
    ██████████████████████████████████████████████████████████████████  0.01 ms vda1

    Network recv_bytes
    ###############################################################################
    ███████████████████████████████████████████████████████  739.50 KB eth0
    █████                                                     80.27 KB lo

    Network sent_bytes
    ###############################################################################
    ████████████████████████████████████████████████████████  9.36 MB eth0

System calls
------------

Per-TID and global system call statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

   lttng-syscallstats /path/to/trace

::

    Timerange: [2015-01-15 12:18:37.216484041, 2015-01-15 12:18:53.821580313]

    Per-TID syscalls statistics (usec)
    find (22785)               Count       Min       Average        Max       Stdev  Return values
     - getdents                14240      0.380       364.301  43372.450    1629.390  {'success': 14240}
     - close                   14236      0.233         0.506      4.932       0.217  {'success': 14236}
     - fchdir                  14231      0.252         0.407      5.769       0.117  {'success': 14231}
     - open                     7123      0.779         2.321     12.697       0.936  {'success': 7119, 'ENOENT': 4}
     - newfstatat               7118      1.457       143.562  28103.532    1410.281  {'success': 7118}
     - openat                   7118      1.525         2.411      9.107       0.771  {'success': 7118}
     - newfstat                 7117      0.272         0.654      8.707       0.248  {'success': 7117}
     - write                     573      0.298         0.715      8.584       0.391  {'success': 573}
     - brk                        27      0.615         5.768     30.792       7.830  {'success': 27}
     - rt_sigaction               22      0.227         0.283      0.589       0.098  {'success': 22}
     - mmap                       12      1.116         2.116      3.597       0.762  {'success': 12}
     - mprotect                    6      1.185         2.235      3.923       1.148  {'success': 6}
     - read                        5      0.925         2.101      6.300       2.351  {'success': 5}
     - ioctl                       4      0.342         1.151      2.280       0.873  {'success': 2, 'ENOTTY': 2}
     - access                      4      1.166         2.530      4.202       1.527  {'ENOENT': 4}
     - rt_sigprocmask              3      0.325         0.570      0.979       0.357  {'success': 3}
     - dup2                        2      0.250         0.562      0.874           ?  {'success': 2}
     - munmap                      2      3.006         5.399      7.792           ?  {'success': 2}
     - execve                      1   7277.974      7277.974   7277.974           ?  {'success': 1}
     - setpgid                     1      0.945         0.945      0.945           ?  {'success': 1}
     - fcntl                       1          ?         0.000      0.000           ?  {}
     - newuname                    1      1.240         1.240      1.240           ?  {'success': 1}
    Total: 71847
    -----------------------------------------------------------------------------------------------------------------
    apache2 (31517)            Count       Min       Average        Max       Stdev  Return values
     - fcntl                     192          ?         0.000      0.000           ? 
{} - newfstat 156 0.237 0.484 1.102 0.222 {'success': 156} - read 144 0.307 1.602 16.307 1.698 {'success': 117, 'EAGAIN': 27} - access 96 0.705 1.580 3.364 0.670 {'success': 12, 'ENOENT': 84} - newlstat 84 0.459 0.738 1.456 0.186 {'success': 63, 'ENOENT': 21} - newstat 74 0.735 2.266 11.212 1.772 {'success': 50, 'ENOENT': 24} - lseek 72 0.317 0.522 0.915 0.112 {'success': 72} - close 39 0.471 0.615 0.867 0.069 {'success': 39} - open 36 2.219 12162.689 437697.753 72948.868 {'success': 36} - getcwd 28 0.287 0.701 1.331 0.277 {'success': 28} - poll 27 1.080 1139.669 2851.163 856.723 {'success': 27} - times 24 0.765 0.956 1.327 0.107 {'success': 24} - setitimer 24 0.499 5.848 16.668 4.041 {'success': 24} - write 24 5.467 6.784 16.827 2.459 {'success': 24} - writev 24 10.241 17.645 29.817 5.116 {'success': 24} - mmap 15 3.060 3.482 4.406 0.317 {'success': 15} - munmap 15 2.944 3.502 4.154 0.427 {'success': 15} - brk 12 0.738 4.579 13.795 4.437 {'success': 12} - chdir 12 0.989 1.600 2.353 0.385 {'success': 12} - flock 6 0.906 1.282 2.043 0.423 {'success': 6} - rt_sigaction 6 0.530 0.725 1.123 0.217 {'success': 6} - pwrite64 6 1.262 1.430 1.692 0.143 {'success': 6} - rt_sigprocmask 6 0.539 0.650 0.976 0.162 {'success': 6} - shutdown 3 7.323 8.487 10.281 1.576 {'success': 3} - getsockname 3 1.015 1.228 1.585 0.311 {'success': 3} - accept4 3 5174453.611 3450157.282 5176018.235 ? {'success': 2} Total: 1131 Interrupts ---------- Hardware and software interrupt statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-irqstats /path/to/trace :: Timerange: [2014-03-11 16:05:41.314824752, 2014-03-11 16:05:45.041994298] Hard IRQ Duration (us) count min avg max stdev ----------------------------------------------------------------------------------| 1: 30 10.901 45.500 64.510 18.447 | 42: 259 3.203 7.863 21.426 3.183 | 43: 2 3.859 3.976 4.093 0.165 | 44: 92 0.300 3.995 6.542 2.181 | Soft IRQ Duration (us) Raise latency (us) count min avg max stdev | count min avg max stdev ----------------------------------------------------------------------------------|------------------------------------------------------------ 1: 495 0.202 21.058 51.060 11.047 | 53 2.141 11.217 20.005 7.233 3: 14 0.133 9.177 32.774 10.483 | 14 0.763 3.703 10.902 3.448 4: 257 5.981 29.064 125.862 15.891 | 257 0.891 3.104 15.054 2.046 6: 26 0.309 1.198 1.748 0.329 | 26 9.636 39.222 51.430 11.246 7: 299 1.185 14.768 90.465 15.992 | 298 1.286 31.387 61.700 11.866 9: 338 0.592 3.387 13.745 1.356 | 147 2.480 29.299 64.453 14.286 Interrupt handler duration frequency distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
code-block:: bash

   lttng-irqfreq --timerange=[16:05:42,16:05:45] --irq=44 --stats /path/to/trace

::

    Timerange: [2014-03-11 16:05:42.042034570, 2014-03-11 16:05:44.998914297]

    Hard IRQ                                             Duration (us)
                           count          min          avg          max        stdev
    ----------------------------------------------------------------------------------|
    44:                       72        0.300        4.018        6.542        2.164  |

    Frequency distribution iwlwifi (44)
    ###############################################################################
    0.300 █████  1.00
    0.612 ██████████████████████████████████████████████████████████████  12.00
    0.924 ████████████████████  4.00
    1.236 ██████████  2.00
    1.548   0.00
    1.861 █████  1.00
    2.173   0.00
    2.485 █████  1.00
    2.797 ██████████████████████████  5.00
    3.109 █████  1.00
    3.421 ███████████████  3.00
    3.733   0.00
    4.045 █████  1.00
    4.357 █████  1.00
    4.669 ██████████  2.00
    4.981 ██████████  2.00
    5.294 █████████████████████████████████████████  8.00
    5.606 ████████████████████████████████████████████████████████████████████  13.00
    5.918 ██████████████████████████████████████████████████████████████  12.00
    6.230 ███████████████  3.00

Community
=========

LTTng analyses is part of the `LTTng `_ project and shares its
community. We hope you have fun trying this project, and please
remember it is a work in progress; feedback, bug reports, and
improvement ideas are always welcome!

.. list-table:: LTTng analyses project's communication channels
   :header-rows: 1

   * - Item
     - Location
     - Notes
   * - Mailing list
     - `lttng-dev `_ (``lttng-dev@lists.lttng.org``)
     - Preferably, use the ``[lttng-analyses]`` subject prefix
   * - IRC
     - ``#lttng`` on the OFTC network
     -
   * - Code contribution
     - Create a new GitHub `pull request `_
     -
   * - Bug reporting
     - Create a new GitHub `issue `_
     -
   * - Continuous integration
     - `lttng-analyses_master_build item `_ on LTTng's CI and
       `lttng/lttng-analyses project `_ on Travis CI
     -
   * - Blog
     - The `LTTng blog `_ contains some posts about LTTng analyses
     -

lttnganalyses-0.6.1/versioneer.py

# Version: 0.15

"""
The Versioneer
==============

* like a rocketeer, but for versions!
* https://github.com/warner/python-versioneer
* Brian Warner
* License: Public Domain
* Compatible With: python2.6, 2.7, 3.2, 3.3, 3.4, and pypy
* [![Latest Version]
(https://pypip.in/version/versioneer/badge.svg?style=flat)
](https://pypi.python.org/pypi/versioneer/)
* [![Build Status]
(https://travis-ci.org/warner/python-versioneer.png?branch=master)
](https://travis-ci.org/warner/python-versioneer)

This is a tool for managing a recorded version number in distutils-based
python projects. The goal is to remove the tedious and error-prone "update
the embedded version string" step from your release process. Making a new
release should be as easy as recording a new tag in your version-control
system, and maybe making new tarballs.
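As a rough sketch of that release flow (the tag name is made up for
illustration; the actual commands appear later in this document):

    git tag 1.0            # record the new version as a VCS tag
    python setup.py sdist  # the tarball's version string is derived from the tag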
## Quick Install

* `pip install versioneer` to somewhere in your $PATH
* add a `[versioneer]` section to your setup.cfg (see below)
* run `versioneer install` in your source tree, commit the results

## Version Identifiers

Source trees come from a variety of places:

* a version-control system checkout (mostly used by developers)
* a nightly tarball, produced by build automation
* a snapshot tarball, produced by a web-based VCS browser, like github's
  "tarball from tag" feature
* a release tarball, produced by "setup.py sdist", distributed through PyPI

Within each source tree, the version identifier (either a string or a number,
this tool is format-agnostic) can come from a variety of places:

* ask the VCS tool itself, e.g. "git describe" (for checkouts), which knows
  about recent "tags" and an absolute revision-id
* the name of the directory into which the tarball was unpacked
* an expanded VCS keyword ($Id$, etc)
* a `_version.py` created by some earlier build step

For released software, the version identifier is closely related to a VCS
tag. Some projects use tag names that include more than just the version
string (e.g. "myproject-1.2" instead of just "1.2"), in which case the tool
needs to strip the tag prefix to extract the version identifier. For
unreleased software (between tags), the version identifier should provide
enough information to help developers recreate the same tree, while also
giving them an idea of roughly how old the tree is (after version 1.2,
before version 1.3). Many VCS systems can report a description that
captures this, for example `git describe --tags --dirty --always` reports
things like "0.7-1-g574ab98-dirty" to indicate that the checkout is one
revision past the 0.7 tag, has a unique revision id of "574ab98", and is
"dirty" (it has uncommitted changes).

The version identifier is used for multiple purposes:

* to allow the module to self-identify its version: `myproject.__version__`
* to choose a name and prefix for a 'setup.py sdist' tarball

## Theory of Operation

Versioneer works by adding a special `_version.py` file into your source
tree, where your `__init__.py` can import it. This `_version.py` knows how to
dynamically ask the VCS tool for version information at import time.

`_version.py` also contains `$Revision$` markers, and the installation
process marks `_version.py` to have this marker rewritten with a tag name
during the `git archive` command. As a result, generated tarballs will
contain enough information to get the proper version.

To allow `setup.py` to compute a version too, a `versioneer.py` is added to
the top level of your source tree, next to `setup.py` and the `setup.cfg`
that configures it. This overrides several distutils/setuptools commands to
compute the version when invoked, and changes `setup.py build` and
`setup.py sdist` to replace `_version.py` with a small static file that
contains just the generated version data.

## Installation

First, decide on values for the following configuration variables:

* `VCS`: the version control system you use. Currently accepts "git".

* `style`: the style of version string to be produced. See "Styles" below
  for details. Defaults to "pep440", which looks like
  `TAG[+DISTANCE.gSHORTHASH[.dirty]]`.

* `versionfile_source`:

  A project-relative pathname into which the generated version strings
  should be written. This is usually a `_version.py` next to your project's
  main `__init__.py` file, so it can be imported at runtime.
  If your project uses `src/myproject/__init__.py`, this should be
  `src/myproject/_version.py`. This file should be checked in to your VCS
  as usual: the copy created below by `setup.py setup_versioneer` will
  include code that parses expanded VCS keywords in generated tarballs.
  The 'build' and 'sdist' commands will replace it with a copy that has
  just the calculated version string.

  This must be set even if your project does not have any modules (and
  will therefore never import `_version.py`), since "setup.py sdist"
  -based trees still need somewhere to record the pre-calculated version
  strings. Anywhere in the source tree should do. If there is a
  `__init__.py` next to your `_version.py`, the `setup.py
  setup_versioneer` command (described below) will append some
  `__version__`-setting assignments, if they aren't already present.

* `versionfile_build`:

  Like `versionfile_source`, but relative to the build directory instead
  of the source directory. These will differ when your setup.py uses
  'package_dir='. If you have `package_dir={'myproject': 'src/myproject'}`,
  then you will probably have `versionfile_build='myproject/_version.py'`
  and `versionfile_source='src/myproject/_version.py'`.

  If this is set to None, then `setup.py build` will not attempt to
  rewrite any `_version.py` in the built tree. If your project does not
  have any libraries (e.g. if it only builds a script), then you should
  use `versionfile_build = None` and override
  `distutils.command.build_scripts` to explicitly insert a copy of
  `versioneer.get_version()` into your generated script.

* `tag_prefix`: a string, like 'PROJECTNAME-', which appears at the start
  of all VCS tags. If your tags look like 'myproject-1.2.0', then you
  should use tag_prefix='myproject-'. If you use unprefixed tags like
  '1.2.0', this should be an empty string.

* `parentdir_prefix`: an optional string, frequently the same as
  tag_prefix, which appears at the start of all unpacked tarball
  filenames. If your tarball unpacks into 'myproject-1.2.0', this should
  be 'myproject-'. To disable this feature, just omit the field from your
  `setup.cfg`.

This tool provides one script, named `versioneer`. That script has one
mode, "install", which writes a copy of `versioneer.py` into the current
directory and runs `versioneer.py setup` to finish the installation.

To versioneer-enable your project:

* 1: Modify your `setup.cfg`, adding a section named `[versioneer]` and
  populating it with the configuration values you decided earlier (note
  that the option names are not case-sensitive):

  ````
  [versioneer]
  VCS = git
  style = pep440
  versionfile_source = src/myproject/_version.py
  versionfile_build = myproject/_version.py
  tag_prefix = ""
  parentdir_prefix = myproject-
  ````

* 2: Run `versioneer install`. This will do the following:

  * copy `versioneer.py` into the top of your source tree
  * create `_version.py` in the right place (`versionfile_source`)
  * modify your `__init__.py` (if one exists next to `_version.py`) to
    define `__version__` (by calling a function from `_version.py`)
  * modify your `MANIFEST.in` to include both `versioneer.py` and the
    generated `_version.py` in sdist tarballs

  `versioneer install` will complain about any problems it finds with
  your `setup.py` or `setup.cfg`. Run it multiple times until you have
  fixed all the problems.

* 3: add an `import versioneer` to your setup.py, and add the following
  arguments to the setup() call:

        version=versioneer.get_version(),
        cmdclass=versioneer.get_cmdclass(),

* 4: commit these changes to your VCS. A shell sketch of the whole
  sequence follows.
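As a rough shell sketch (assuming the `src/myproject` layout used in the
examples above; adapt the paths to your project):

    pip install versioneer
    # fill in the [versioneer] section of setup.cfg first, then:
    versioneer install
    # add the import and the setup() arguments to setup.py, then commit:
    git add setup.py setup.cfg versioneer.py src/myproject/_version.py
    git commit -m "versioneer-enable the project"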
To make sure you won't forget, `versioneer install` will mark everything it
touched for addition using `git add`. Don't forget to add `setup.py` and
`setup.cfg` too.

## Post-Installation Usage

Once established, all uses of your tree from a VCS checkout should get the
current version string. All generated tarballs should include an embedded
version string (so users who unpack them will not need a VCS tool installed).

If you distribute your project through PyPI, then the release process should
boil down to two steps:

* 1: git tag 1.0
* 2: python setup.py register sdist upload

If you distribute it through github (i.e. users use github to generate
tarballs with `git archive`), the process is:

* 1: git tag 1.0
* 2: git push; git push --tags

Versioneer will report "0+untagged.NUMCOMMITS.gHASH" until your tree has at
least one tag in its history.

## Version-String Flavors

Code which uses Versioneer can learn about its version string at runtime by
importing `_version` from your main `__init__.py` file and running the
`get_versions()` function. From the "outside" (e.g. in `setup.py`), you can
import the top-level `versioneer.py` and run `get_versions()`.

Both functions return a dictionary with different flavors of version
information:

* `['version']`: A condensed version string, rendered using the selected
  style. This is the most commonly used value for the project's version
  string. The default "pep440" style yields strings like `0.11`,
  `0.11+2.g1076c97`, or `0.11+2.g1076c97.dirty`. See the "Styles" section
  below for alternative styles.

* `['full-revisionid']`: detailed revision identifier. For Git, this is the
  full SHA1 commit id, e.g. "1076c978a8d3cfc70f408fe5974aa6c092c949ac".

* `['dirty']`: a boolean, True if the tree has uncommitted changes. Note that
  this is only accurate if run in a VCS checkout; otherwise it is likely to
  be False or None.

* `['error']`: if the version string could not be computed, this will be set
  to a string describing the problem, otherwise it will be None. It may be
  useful to throw an exception in setup.py if this is set, to avoid e.g.
  creating tarballs with a version string of "unknown".

Some variants are more useful than others. Including `full-revisionid` in a
bug report should allow developers to reconstruct the exact code being tested
(or indicate the presence of local changes that should be shared with the
developers). `version` is suitable for display in an "about" box or a CLI
`--version` output: it can be easily compared against release notes and lists
of bugs fixed in various releases.

The installer adds the following text to your `__init__.py` to place a basic
version in `YOURPROJECT.__version__`:

    from ._version import get_versions
    __version__ = get_versions()['version']
    del get_versions

## Styles

The setup.cfg `style=` configuration controls how the VCS information is
rendered into a version string.

The default style, "pep440", produces a PEP440-compliant string, equal to the
un-prefixed tag name for actual releases, and containing an additional "local
version" section with more detail for in-between builds. For Git, this is
TAG[+DISTANCE.gHEX[.dirty]], using information from `git describe --tags
--dirty --always`. For example "0.11+2.g1076c97.dirty" indicates that the
tree is like the "1076c97" commit but has uncommitted changes (".dirty"), and
that this commit is two revisions ("+2") beyond the "0.11" tag. For released
software (exactly equal to a known tag), the identifier will only contain the
stripped tag, e.g. "0.11".

Other styles are available; see details.md in the Versioneer source tree for
descriptions.
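As a rough, self-contained sketch of the "pep440" rendering rule described
above (a simplified cousin of the `render_pep440()` function defined later in
this file, assuming a tag was found):

````
def render_pep440_sketch(closest_tag, distance, short_hash, dirty):
    # TAG[+DISTANCE.gHEX[.dirty]]; dirtying a tagged build gives TAG+0.gHEX.dirty
    rendered = closest_tag
    if distance or dirty:
        rendered += "+{}.g{}".format(distance, short_hash)
        if dirty:
            rendered += ".dirty"
    return rendered

# render_pep440_sketch("0.11", 2, "1076c97", True) == "0.11+2.g1076c97.dirty"
# render_pep440_sketch("0.11", 0, "1076c97", False) == "0.11"
````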
## Debugging

Versioneer tries to avoid fatal errors: if something goes wrong, it will tend
to return a version of "0+unknown". To investigate the problem, run
`setup.py version`, which will run the version-lookup code in a verbose mode,
and will display the full contents of `get_versions()` (including the `error`
string, which may help identify what went wrong).

## Updating Versioneer

To upgrade your project to a new release of Versioneer, do the following:

* install the new Versioneer (`pip install -U versioneer` or equivalent)
* edit `setup.cfg`, if necessary, to include any new configuration settings
  indicated by the release notes
* re-run `versioneer install` in your source tree, to replace
  `SRC/_version.py`
* commit any changed files

### Upgrading to 0.15

Starting with this version, Versioneer is configured with a `[versioneer]`
section in your `setup.cfg` file. Earlier versions required the `setup.py` to
set attributes on the `versioneer` module immediately after import. The new
version will refuse to run (raising an exception during import) until you
have provided the necessary `setup.cfg` section.

In addition, the Versioneer package provides an executable named
`versioneer`, and the installation process is driven by running `versioneer
install`. In 0.14 and earlier, the executable was named
`versioneer-installer` and was run without an argument.

### Upgrading to 0.14

0.14 changes the format of the version string. 0.13 and earlier used
hyphen-separated strings like "0.11-2-g1076c97-dirty". 0.14 and beyond use a
plus-separated "local version" section, with dot-separated components, like
"0.11+2.g1076c97". PEP440-strict tools did not like the old format, but
should be ok with the new one.

### Upgrading from 0.11 to 0.12

Nothing special.

### Upgrading from 0.10 to 0.11

You must add `versioneer.VCS = "git"` to your `setup.py` before re-running
`setup.py setup_versioneer`. This will enable the use of additional
version-control systems (SVN, etc) in the future.

## Future Directions

This tool is designed to be easily extended to other version-control systems:
all VCS-specific components are in separate directories like src/git/ . The
top-level `versioneer.py` script is assembled from these components by
running make-versioneer.py . In the future, make-versioneer.py will take a
VCS name as an argument, and will construct a version of `versioneer.py` that
is specific to the given VCS. It might also take the configuration arguments
that are currently provided manually during installation by editing
setup.py . Alternatively, it might go the other direction and include code
from all supported VCS systems, reducing the number of intermediate scripts.

## License

To make Versioneer easier to embed, all its code is hereby released into the
public domain. The `_version.py` that it creates is also in the public
domain.

""" from __future__ import print_function try: import configparser except ImportError: import ConfigParser as configparser import errno import json import os import re import subprocess import sys class VersioneerConfig: pass def get_root(): # we require that all commands are run from the project root, i.e. the # directory that contains setup.py, setup.cfg, and versioneer.py .
root = os.path.realpath(os.path.abspath(os.getcwd())) setup_py = os.path.join(root, "setup.py") versioneer_py = os.path.join(root, "versioneer.py") if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)): # allow 'python path/to/setup.py COMMAND' root = os.path.dirname(os.path.realpath(os.path.abspath(sys.argv[0]))) setup_py = os.path.join(root, "setup.py") versioneer_py = os.path.join(root, "versioneer.py") if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)): err = ("Versioneer was unable to find the project root directory. " "Versioneer requires setup.py to be executed from " "its immediate directory (like 'python setup.py COMMAND'), " "or in a way that lets it use sys.argv[0] to find the root " "(like 'python path/to/setup.py COMMAND').") raise VersioneerBadRootError(err) try: # Certain runtime workflows (setup.py install/develop in a setuptools # tree) execute all dependencies in a single python process, so # "versioneer" may be imported multiple times, and python's shared # module-import table will cache the first one. So we can't use # os.path.dirname(__file__), as that will find whichever # versioneer.py was first imported, even in later projects. me = os.path.realpath(os.path.abspath(__file__)) if os.path.splitext(me)[0] != os.path.splitext(versioneer_py)[0]: print("Warning: build in %s is using versioneer.py from %s" % (os.path.dirname(me), versioneer_py)) except NameError: pass return root def get_config_from_root(root): # This might raise EnvironmentError (if setup.cfg is missing), or # configparser.NoSectionError (if it lacks a [versioneer] section), or # configparser.NoOptionError (if it lacks "VCS="). See the docstring at # the top of versioneer.py for instructions on writing your setup.cfg . setup_cfg = os.path.join(root, "setup.cfg") parser = configparser.SafeConfigParser() with open(setup_cfg, "r") as f: parser.readfp(f) VCS = parser.get("versioneer", "VCS") # mandatory def get(parser, name): if parser.has_option("versioneer", name): return parser.get("versioneer", name) return None cfg = VersioneerConfig() cfg.VCS = VCS cfg.style = get(parser, "style") or "" cfg.versionfile_source = get(parser, "versionfile_source") cfg.versionfile_build = get(parser, "versionfile_build") cfg.tag_prefix = get(parser, "tag_prefix") cfg.parentdir_prefix = get(parser, "parentdir_prefix") cfg.verbose = get(parser, "verbose") return cfg class NotThisMethod(Exception): pass # these dictionaries contain VCS-specific tools LONG_VERSION_PY = {} HANDLERS = {} def register_vcs_handler(vcs, method): # decorator def decorate(f): if vcs not in HANDLERS: HANDLERS[vcs] = {} HANDLERS[vcs][method] = f return f return decorate def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False): assert isinstance(commands, list) p = None for c in commands: try: dispcmd = str([c] + args) # remember shell=False, so use git.cmd on windows, not just git p = subprocess.Popen([c] + args, cwd=cwd, stdout=subprocess.PIPE, stderr=(subprocess.PIPE if hide_stderr else None)) break except EnvironmentError: e = sys.exc_info()[1] if e.errno == errno.ENOENT: continue if verbose: print("unable to run %s" % dispcmd) print(e) return None else: if verbose: print("unable to find command, tried %s" % (commands,)) return None stdout = p.communicate()[0].strip() if sys.version_info[0] >= 3: stdout = stdout.decode() if p.returncode != 0: if verbose: print("unable to run %s (error)" % dispcmd) return None return stdout LONG_VERSION_PY['git'] = ''' # This file helps to compute a version number in
source trees obtained from # git-archive tarball (such as those provided by githubs download-from-tag # feature). Distribution tarballs (built by setup.py sdist) and build # directories (produced by setup.py build) will contain a much shorter file # that just contains the computed version number. # This file is released into the public domain. Generated by # versioneer-0.15 (https://github.com/warner/python-versioneer) import errno import os import re import subprocess import sys def get_keywords(): # these strings will be replaced by git during git-archive. # setup.py/versioneer.py will grep for the variable names, so they must # each be defined on a line of their own. _version.py will just call # get_keywords(). git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s" git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s" keywords = {"refnames": git_refnames, "full": git_full} return keywords class VersioneerConfig: pass def get_config(): # these strings are filled in when 'setup.py versioneer' creates # _version.py cfg = VersioneerConfig() cfg.VCS = "git" cfg.style = "%(STYLE)s" cfg.tag_prefix = "%(TAG_PREFIX)s" cfg.parentdir_prefix = "%(PARENTDIR_PREFIX)s" cfg.versionfile_source = "%(VERSIONFILE_SOURCE)s" cfg.verbose = False return cfg class NotThisMethod(Exception): pass LONG_VERSION_PY = {} HANDLERS = {} def register_vcs_handler(vcs, method): # decorator def decorate(f): if vcs not in HANDLERS: HANDLERS[vcs] = {} HANDLERS[vcs][method] = f return f return decorate def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False): assert isinstance(commands, list) p = None for c in commands: try: dispcmd = str([c] + args) # remember shell=False, so use git.cmd on windows, not just git p = subprocess.Popen([c] + args, cwd=cwd, stdout=subprocess.PIPE, stderr=(subprocess.PIPE if hide_stderr else None)) break except EnvironmentError: e = sys.exc_info()[1] if e.errno == errno.ENOENT: continue if verbose: print("unable to run %%s" %% dispcmd) print(e) return None else: if verbose: print("unable to find command, tried %%s" %% (commands,)) return None stdout = p.communicate()[0].strip() if sys.version_info[0] >= 3: stdout = stdout.decode() if p.returncode != 0: if verbose: print("unable to run %%s (error)" %% dispcmd) return None return stdout def versions_from_parentdir(parentdir_prefix, root, verbose): # Source tarballs conventionally unpack into a directory that includes # both the project name and a version string. dirname = os.path.basename(root) if not dirname.startswith(parentdir_prefix): if verbose: print("guessing rootdir is '%%s', but '%%s' doesn't start with " "prefix '%%s'" %% (root, dirname, parentdir_prefix)) raise NotThisMethod("rootdir doesn't start with parentdir_prefix") return {"version": dirname[len(parentdir_prefix):], "full-revisionid": None, "dirty": False, "error": None} @register_vcs_handler("git", "get_keywords") def git_get_keywords(versionfile_abs): # the code embedded in _version.py can just fetch the value of these # keywords. When used from setup.py, we don't want to import _version.py, # so we do it with a regexp instead. This function is not used from # _version.py. 
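    # For example, in a tarball made by "git archive" from a tagged commit,
    # git_refnames might expand to something like " (HEAD -> master, tag: 0.11)"
    # and git_full to the 40-character commit id. In a plain checkout the
    # placeholders stay unexpanded, which git_versions_from_keywords() below
    # detects and rejects. (Illustrative comment; values depend on your git.)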
keywords = {} try: f = open(versionfile_abs, "r") for line in f.readlines(): if line.strip().startswith("git_refnames ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["refnames"] = mo.group(1) if line.strip().startswith("git_full ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["full"] = mo.group(1) f.close() except EnvironmentError: pass return keywords @register_vcs_handler("git", "keywords") def git_versions_from_keywords(keywords, tag_prefix, verbose): if not keywords: raise NotThisMethod("no keywords at all, weird") refnames = keywords["refnames"].strip() if refnames.startswith("$Format"): if verbose: print("keywords are unexpanded, not using") raise NotThisMethod("unexpanded keywords, not a git-archive tarball") refs = set([r.strip() for r in refnames.strip("()").split(",")]) # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of # just "foo-1.0". If we see a "tag: " prefix, prefer those. TAG = "tag: " tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)]) if not tags: # Either we're using git < 1.8.3, or there really are no tags. We use # a heuristic: assume all version tags have a digit. The old git %%d # expansion behaves like git log --decorate=short and strips out the # refs/heads/ and refs/tags/ prefixes that would let us distinguish # between branches and tags. By ignoring refnames without digits, we # filter out many common branch names like "release" and # "stabilization", as well as "HEAD" and "master". tags = set([r for r in refs if re.search(r'\d', r)]) if verbose: print("discarding '%%s', no digits" %% ",".join(refs-tags)) if verbose: print("likely tags: %%s" %% ",".join(sorted(tags))) for ref in sorted(tags): # sorting will prefer e.g. "2.0" over "2.0rc1" if ref.startswith(tag_prefix): r = ref[len(tag_prefix):] if verbose: print("picking %%s" %% r) return {"version": r, "full-revisionid": keywords["full"].strip(), "dirty": False, "error": None } # no suitable tags, so version is "0+unknown", but full hex is still there if verbose: print("no suitable tags, using unknown + full revision id") return {"version": "0+unknown", "full-revisionid": keywords["full"].strip(), "dirty": False, "error": "no suitable tags"} @register_vcs_handler("git", "pieces_from_vcs") def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command): # this runs 'git' from the root of the source tree. This only gets called # if the git-archive 'subst' keywords were *not* expanded, and # _version.py hasn't already been rewritten with a short version string, # meaning we're inside a checked out source tree. if not os.path.exists(os.path.join(root, ".git")): if verbose: print("no .git in %%s" %% root) raise NotThisMethod("no .git directory") GITS = ["git"] if sys.platform == "win32": GITS = ["git.cmd", "git.exe"] # if there is a tag, this yields TAG-NUM-gHEX[-dirty] # if there are no tags, this yields HEX[-dirty] (no NUM) describe_out = run_command(GITS, ["describe", "--tags", "--dirty", "--always", "--long"], cwd=root) # --long was added in git-1.5.5 if describe_out is None: raise NotThisMethod("'git describe' failed") describe_out = describe_out.strip() full_out = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) if full_out is None: raise NotThisMethod("'git rev-parse' failed") full_out = full_out.strip() pieces = {} pieces["long"] = full_out pieces["short"] = full_out[:7] # maybe improved later pieces["error"] = None # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] # TAG might have hyphens. 
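    # For example, a describe_out of "0.11-2-g1076c97-dirty" yields
    # closest-tag "0.11", distance 2, short "1076c97", and dirty True,
    # after the "-dirty" suffix is stripped below. (Illustrative comment.)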
git_describe = describe_out # look for -dirty suffix dirty = git_describe.endswith("-dirty") pieces["dirty"] = dirty if dirty: git_describe = git_describe[:git_describe.rindex("-dirty")] # now we have TAG-NUM-gHEX or HEX if "-" in git_describe: # TAG-NUM-gHEX mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe) if not mo: # unparseable. Maybe git-describe is misbehaving? pieces["error"] = ("unable to parse git-describe output: '%%s'" %% describe_out) return pieces # tag full_tag = mo.group(1) if not full_tag.startswith(tag_prefix): if verbose: fmt = "tag '%%s' doesn't start with prefix '%%s'" print(fmt %% (full_tag, tag_prefix)) pieces["error"] = ("tag '%%s' doesn't start with prefix '%%s'" %% (full_tag, tag_prefix)) return pieces pieces["closest-tag"] = full_tag[len(tag_prefix):] # distance: number of commits since tag pieces["distance"] = int(mo.group(2)) # commit: short hex revision ID pieces["short"] = mo.group(3) else: # HEX: no tags pieces["closest-tag"] = None count_out = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root) pieces["distance"] = int(count_out) # total number of commits return pieces def plus_or_dot(pieces): if "+" in pieces.get("closest-tag", ""): return "." return "+" def render_pep440(pieces): # now build up version string, with post-release "local version # identifier". Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you # get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty # exceptions: # 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty] if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += plus_or_dot(pieces) rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" else: # exception #1 rendered = "0+untagged.%%d.g%%s" %% (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" return rendered def render_pep440_pre(pieces): # TAG[.post.devDISTANCE] . No -dirty # exceptions: # 1: no tags. 0.post.devDISTANCE if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += ".post.dev%%d" %% pieces["distance"] else: # exception #1 rendered = "0.post.dev%%d" %% pieces["distance"] return rendered def render_pep440_post(pieces): # TAG[.postDISTANCE[.dev0]+gHEX] . The ".dev0" means dirty. Note that # .dev0 sorts backwards (a dirty tree will appear "older" than the # corresponding clean one), but you shouldn't be releasing software with # -dirty anyways. # exceptions: # 1: no tags. 0.postDISTANCE[.dev0] if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += plus_or_dot(pieces) rendered += "g%%s" %% pieces["short"] else: # exception #1 rendered = "0.post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += "+g%%s" %% pieces["short"] return rendered def render_pep440_old(pieces): # TAG[.postDISTANCE[.dev0]] . The ".dev0" means dirty. # exceptions: # 1: no tags. 
0.postDISTANCE[.dev0] if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" else: # exception #1 rendered = "0.post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" return rendered def render_git_describe(pieces): # TAG[-DISTANCE-gHEX][-dirty], like 'git describe --tags --dirty # --always' # exceptions: # 1: no tags. HEX[-dirty] (note: no 'g' prefix) if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"]) else: # exception #1 rendered = pieces["short"] if pieces["dirty"]: rendered += "-dirty" return rendered def render_git_describe_long(pieces): # TAG-DISTANCE-gHEX[-dirty], like 'git describe --tags --dirty # --always -long'. The distance/hash is unconditional. # exceptions: # 1: no tags. HEX[-dirty] (note: no 'g' prefix) if pieces["closest-tag"]: rendered = pieces["closest-tag"] rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"]) else: # exception #1 rendered = pieces["short"] if pieces["dirty"]: rendered += "-dirty" return rendered def render(pieces, style): if pieces["error"]: return {"version": "unknown", "full-revisionid": pieces.get("long"), "dirty": None, "error": pieces["error"]} if not style or style == "default": style = "pep440" # the default if style == "pep440": rendered = render_pep440(pieces) elif style == "pep440-pre": rendered = render_pep440_pre(pieces) elif style == "pep440-post": rendered = render_pep440_post(pieces) elif style == "pep440-old": rendered = render_pep440_old(pieces) elif style == "git-describe": rendered = render_git_describe(pieces) elif style == "git-describe-long": rendered = render_git_describe_long(pieces) else: raise ValueError("unknown style '%%s'" %% style) return {"version": rendered, "full-revisionid": pieces["long"], "dirty": pieces["dirty"], "error": None} def get_versions(): # I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have # __file__, we can work backwards from there to the root. Some # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which # case we can only use expanded keywords. cfg = get_config() verbose = cfg.verbose try: return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, verbose) except NotThisMethod: pass try: root = os.path.realpath(__file__) # versionfile_source is the relative path from the top of the source # tree (where the .git directory might live) to this file. Invert # this to find the root from __file__. for i in cfg.versionfile_source.split('/'): root = os.path.dirname(root) except NameError: return {"version": "0+unknown", "full-revisionid": None, "dirty": None, "error": "unable to find root of source tree"} try: pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose) return render(pieces, cfg.style) except NotThisMethod: pass try: if cfg.parentdir_prefix: return versions_from_parentdir(cfg.parentdir_prefix, root, verbose) except NotThisMethod: pass return {"version": "0+unknown", "full-revisionid": None, "dirty": None, "error": "unable to compute version"} ''' @register_vcs_handler("git", "get_keywords") def git_get_keywords(versionfile_abs): # the code embedded in _version.py can just fetch the value of these # keywords. When used from setup.py, we don't want to import _version.py, # so we do it with a regexp instead. This function is not used from # _version.py. 
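    # For example, in a tarball made by "git archive" from a tagged commit,
    # git_refnames might expand to something like " (HEAD -> master, tag: 0.11)"
    # and git_full to the 40-character commit id. In a plain checkout the
    # placeholders stay unexpanded, which git_versions_from_keywords() below
    # detects and rejects. (Illustrative comment; values depend on your git.)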
keywords = {} try: f = open(versionfile_abs, "r") for line in f.readlines(): if line.strip().startswith("git_refnames ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["refnames"] = mo.group(1) if line.strip().startswith("git_full ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["full"] = mo.group(1) f.close() except EnvironmentError: pass return keywords @register_vcs_handler("git", "keywords") def git_versions_from_keywords(keywords, tag_prefix, verbose): if not keywords: raise NotThisMethod("no keywords at all, weird") refnames = keywords["refnames"].strip() if refnames.startswith("$Format"): if verbose: print("keywords are unexpanded, not using") raise NotThisMethod("unexpanded keywords, not a git-archive tarball") refs = set([r.strip() for r in refnames.strip("()").split(",")]) # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of # just "foo-1.0". If we see a "tag: " prefix, prefer those. TAG = "tag: " tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)]) if not tags: # Either we're using git < 1.8.3, or there really are no tags. We use # a heuristic: assume all version tags have a digit. The old git %d # expansion behaves like git log --decorate=short and strips out the # refs/heads/ and refs/tags/ prefixes that would let us distinguish # between branches and tags. By ignoring refnames without digits, we # filter out many common branch names like "release" and # "stabilization", as well as "HEAD" and "master". tags = set([r for r in refs if re.search(r'\d', r)]) if verbose: print("discarding '%s', no digits" % ",".join(refs-tags)) if verbose: print("likely tags: %s" % ",".join(sorted(tags))) for ref in sorted(tags): # sorting will prefer e.g. "2.0" over "2.0rc1" if ref.startswith(tag_prefix): r = ref[len(tag_prefix):] if verbose: print("picking %s" % r) return {"version": r, "full-revisionid": keywords["full"].strip(), "dirty": False, "error": None } # no suitable tags, so version is "0+unknown", but full hex is still there if verbose: print("no suitable tags, using unknown + full revision id") return {"version": "0+unknown", "full-revisionid": keywords["full"].strip(), "dirty": False, "error": "no suitable tags"} @register_vcs_handler("git", "pieces_from_vcs") def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command): # this runs 'git' from the root of the source tree. This only gets called # if the git-archive 'subst' keywords were *not* expanded, and # _version.py hasn't already been rewritten with a short version string, # meaning we're inside a checked out source tree. if not os.path.exists(os.path.join(root, ".git")): if verbose: print("no .git in %s" % root) raise NotThisMethod("no .git directory") GITS = ["git"] if sys.platform == "win32": GITS = ["git.cmd", "git.exe"] # if there is a tag, this yields TAG-NUM-gHEX[-dirty] # if there are no tags, this yields HEX[-dirty] (no NUM) describe_out = run_command(GITS, ["describe", "--tags", "--dirty", "--always", "--long"], cwd=root) # --long was added in git-1.5.5 if describe_out is None: raise NotThisMethod("'git describe' failed") describe_out = describe_out.strip() full_out = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) if full_out is None: raise NotThisMethod("'git rev-parse' failed") full_out = full_out.strip() pieces = {} pieces["long"] = full_out pieces["short"] = full_out[:7] # maybe improved later pieces["error"] = None # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] # TAG might have hyphens. 
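    # For example, a describe_out of "0.11-2-g1076c97-dirty" yields
    # closest-tag "0.11", distance 2, short "1076c97", and dirty True,
    # after the "-dirty" suffix is stripped below. (Illustrative comment.)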
git_describe = describe_out # look for -dirty suffix dirty = git_describe.endswith("-dirty") pieces["dirty"] = dirty if dirty: git_describe = git_describe[:git_describe.rindex("-dirty")] # now we have TAG-NUM-gHEX or HEX if "-" in git_describe: # TAG-NUM-gHEX mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe) if not mo: # unparseable. Maybe git-describe is misbehaving? pieces["error"] = ("unable to parse git-describe output: '%s'" % describe_out) return pieces # tag full_tag = mo.group(1) if not full_tag.startswith(tag_prefix): if verbose: fmt = "tag '%s' doesn't start with prefix '%s'" print(fmt % (full_tag, tag_prefix)) pieces["error"] = ("tag '%s' doesn't start with prefix '%s'" % (full_tag, tag_prefix)) return pieces pieces["closest-tag"] = full_tag[len(tag_prefix):] # distance: number of commits since tag pieces["distance"] = int(mo.group(2)) # commit: short hex revision ID pieces["short"] = mo.group(3) else: # HEX: no tags pieces["closest-tag"] = None count_out = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root) pieces["distance"] = int(count_out) # total number of commits return pieces def do_vcs_install(manifest_in, versionfile_source, ipy): GITS = ["git"] if sys.platform == "win32": GITS = ["git.cmd", "git.exe"] files = [manifest_in, versionfile_source] if ipy: files.append(ipy) try: me = __file__ if me.endswith(".pyc") or me.endswith(".pyo"): me = os.path.splitext(me)[0] + ".py" versioneer_file = os.path.relpath(me) except NameError: versioneer_file = "versioneer.py" files.append(versioneer_file) present = False try: f = open(".gitattributes", "r") for line in f.readlines(): if line.strip().startswith(versionfile_source): if "export-subst" in line.strip().split()[1:]: present = True f.close() except EnvironmentError: pass if not present: f = open(".gitattributes", "a+") f.write("%s export-subst\n" % versionfile_source) f.close() files.append(".gitattributes") run_command(GITS, ["add", "--"] + files) def versions_from_parentdir(parentdir_prefix, root, verbose): # Source tarballs conventionally unpack into a directory that includes # both the project name and a version string. dirname = os.path.basename(root) if not dirname.startswith(parentdir_prefix): if verbose: print("guessing rootdir is '%s', but '%s' doesn't start with " "prefix '%s'" % (root, dirname, parentdir_prefix)) raise NotThisMethod("rootdir doesn't start with parentdir_prefix") return {"version": dirname[len(parentdir_prefix):], "full-revisionid": None, "dirty": False, "error": None} SHORT_VERSION_PY = """ # This file was generated by 'versioneer.py' (0.15) from # revision-control system data, or from the parent directory name of an # unpacked source archive. Distribution tarballs contain a pre-generated copy # of this file. 
import json import sys version_json = ''' %s ''' # END VERSION_JSON def get_versions(): return json.loads(version_json) """ def versions_from_file(filename): try: with open(filename) as f: contents = f.read() except EnvironmentError: raise NotThisMethod("unable to read _version.py") mo = re.search(r"version_json = '''\n(.*)''' # END VERSION_JSON", contents, re.M | re.S) if not mo: raise NotThisMethod("no version_json in _version.py") return json.loads(mo.group(1)) def write_to_version_file(filename, versions): os.unlink(filename) contents = json.dumps(versions, sort_keys=True, indent=1, separators=(",", ": ")) with open(filename, "w") as f: f.write(SHORT_VERSION_PY % contents) print("set %s to '%s'" % (filename, versions["version"])) def plus_or_dot(pieces): if "+" in pieces.get("closest-tag", ""): return "." return "+" def render_pep440(pieces): # now build up version string, with post-release "local version # identifier". Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you # get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty # exceptions: # 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty] if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += plus_or_dot(pieces) rendered += "%d.g%s" % (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" else: # exception #1 rendered = "0+untagged.%d.g%s" % (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" return rendered def render_pep440_pre(pieces): # TAG[.post.devDISTANCE] . No -dirty # exceptions: # 1: no tags. 0.post.devDISTANCE if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += ".post.dev%d" % pieces["distance"] else: # exception #1 rendered = "0.post.dev%d" % pieces["distance"] return rendered def render_pep440_post(pieces): # TAG[.postDISTANCE[.dev0]+gHEX] . The ".dev0" means dirty. Note that # .dev0 sorts backwards (a dirty tree will appear "older" than the # corresponding clean one), but you shouldn't be releasing software with # -dirty anyways. # exceptions: # 1: no tags. 0.postDISTANCE[.dev0] if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += plus_or_dot(pieces) rendered += "g%s" % pieces["short"] else: # exception #1 rendered = "0.post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += "+g%s" % pieces["short"] return rendered def render_pep440_old(pieces): # TAG[.postDISTANCE[.dev0]] . The ".dev0" means dirty. # exceptions: # 1: no tags. 0.postDISTANCE[.dev0] if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" else: # exception #1 rendered = "0.post%d" % pieces["distance"] if pieces["dirty"]: rendered += ".dev0" return rendered def render_git_describe(pieces): # TAG[-DISTANCE-gHEX][-dirty], like 'git describe --tags --dirty # --always' # exceptions: # 1: no tags. 
HEX[-dirty] (note: no 'g' prefix) if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += "-%d-g%s" % (pieces["distance"], pieces["short"]) else: # exception #1 rendered = pieces["short"] if pieces["dirty"]: rendered += "-dirty" return rendered def render_git_describe_long(pieces): # TAG-DISTANCE-gHEX[-dirty], like 'git describe --tags --dirty # --always -long'. The distance/hash is unconditional. # exceptions: # 1: no tags. HEX[-dirty] (note: no 'g' prefix) if pieces["closest-tag"]: rendered = pieces["closest-tag"] rendered += "-%d-g%s" % (pieces["distance"], pieces["short"]) else: # exception #1 rendered = pieces["short"] if pieces["dirty"]: rendered += "-dirty" return rendered def render(pieces, style): if pieces["error"]: return {"version": "unknown", "full-revisionid": pieces.get("long"), "dirty": None, "error": pieces["error"]} if not style or style == "default": style = "pep440" # the default if style == "pep440": rendered = render_pep440(pieces) elif style == "pep440-pre": rendered = render_pep440_pre(pieces) elif style == "pep440-post": rendered = render_pep440_post(pieces) elif style == "pep440-old": rendered = render_pep440_old(pieces) elif style == "git-describe": rendered = render_git_describe(pieces) elif style == "git-describe-long": rendered = render_git_describe_long(pieces) else: raise ValueError("unknown style '%s'" % style) return {"version": rendered, "full-revisionid": pieces["long"], "dirty": pieces["dirty"], "error": None} class VersioneerBadRootError(Exception): pass def get_versions(verbose=False): # returns dict with two keys: 'version' and 'full' if "versioneer" in sys.modules: # see the discussion in cmdclass.py:get_cmdclass() del sys.modules["versioneer"] root = get_root() cfg = get_config_from_root(root) assert cfg.VCS is not None, "please set [versioneer]VCS= in setup.cfg" handlers = HANDLERS.get(cfg.VCS) assert handlers, "unrecognized VCS '%s'" % cfg.VCS verbose = verbose or cfg.verbose assert cfg.versionfile_source is not None, \ "please set versioneer.versionfile_source" assert cfg.tag_prefix is not None, "please set versioneer.tag_prefix" versionfile_abs = os.path.join(root, cfg.versionfile_source) # extract version from first of: _version.py, VCS command (e.g. 'git # describe'), parentdir. This is meant to work for developers using a # source checkout, for users of a tarball created by 'setup.py sdist', # and for users of a tarball/zipball created by 'git archive' or github's # download-from-tag feature or the equivalent in other VCSes. 
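    # The order actually tried below: expanded git-archive keywords first,
    # then the static contents of _version.py, then a live VCS query, and
    # finally the parent directory name. (Descriptive comment added for
    # clarity; the logic itself is unchanged.)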
get_keywords_f = handlers.get("get_keywords") from_keywords_f = handlers.get("keywords") if get_keywords_f and from_keywords_f: try: keywords = get_keywords_f(versionfile_abs) ver = from_keywords_f(keywords, cfg.tag_prefix, verbose) if verbose: print("got version from expanded keyword %s" % ver) return ver except NotThisMethod: pass try: ver = versions_from_file(versionfile_abs) if verbose: print("got version from file %s %s" % (versionfile_abs, ver)) return ver except NotThisMethod: pass from_vcs_f = handlers.get("pieces_from_vcs") if from_vcs_f: try: pieces = from_vcs_f(cfg.tag_prefix, root, verbose) ver = render(pieces, cfg.style) if verbose: print("got version from VCS %s" % ver) return ver except NotThisMethod: pass try: if cfg.parentdir_prefix: ver = versions_from_parentdir(cfg.parentdir_prefix, root, verbose) if verbose: print("got version from parentdir %s" % ver) return ver except NotThisMethod: pass if verbose: print("unable to compute version") return {"version": "0+unknown", "full-revisionid": None, "dirty": None, "error": "unable to compute version"} def get_version(): return get_versions()["version"] def get_cmdclass(): if "versioneer" in sys.modules: del sys.modules["versioneer"] # this fixes the "python setup.py develop" case (also 'install' and # 'easy_install .'), in which subdependencies of the main project are # built (using setup.py bdist_egg) in the same python process. Assume # a main project A and a dependency B, which use different versions # of Versioneer. A's setup.py imports A's Versioneer, leaving it in # sys.modules by the time B's setup.py is executed, causing B to run # with the wrong versioneer. Setuptools wraps the sub-dep builds in a # sandbox that restores sys.modules to it's pre-build state, so the # parent is protected against the child's "import versioneer". By # removing ourselves from sys.modules here, before the child build # happens, we protect the child from the parent's versioneer too. # Also see https://github.com/warner/python-versioneer/issues/52 cmds = {} # we add "version" to both distutils and setuptools from distutils.core import Command class cmd_version(Command): description = "report generated version string" user_options = [] boolean_options = [] def initialize_options(self): pass def finalize_options(self): pass def run(self): vers = get_versions(verbose=True) print("Version: %s" % vers["version"]) print(" full-revisionid: %s" % vers.get("full-revisionid")) print(" dirty: %s" % vers.get("dirty")) if vers["error"]: print(" error: %s" % vers["error"]) cmds["version"] = cmd_version # we override "build_py" in both distutils and setuptools # # most invocation pathways end up running build_py: # distutils/build -> build_py # distutils/install -> distutils/build ->.. # setuptools/bdist_wheel -> distutils/install ->.. # setuptools/bdist_egg -> distutils/install_lib -> build_py # setuptools/install -> bdist_egg ->.. # setuptools/develop -> ? from distutils.command.build_py import build_py as _build_py class cmd_build_py(_build_py): def run(self): root = get_root() cfg = get_config_from_root(root) versions = get_versions() _build_py.run(self) # now locate _version.py in the new build/ directory and replace # it with an updated value if cfg.versionfile_build: target_versionfile = os.path.join(self.build_lib, cfg.versionfile_build) print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, versions) cmds["build_py"] = cmd_build_py if "cx_Freeze" in sys.modules: # cx_freeze enabled? 
from cx_Freeze.dist import build_exe as _build_exe class cmd_build_exe(_build_exe): def run(self): root = get_root() cfg = get_config_from_root(root) versions = get_versions() target_versionfile = cfg.versionfile_source print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, versions) _build_exe.run(self) os.unlink(target_versionfile) with open(cfg.versionfile_source, "w") as f: LONG = LONG_VERSION_PY[cfg.VCS] f.write(LONG % {"DOLLAR": "$", "STYLE": cfg.style, "TAG_PREFIX": cfg.tag_prefix, "PARENTDIR_PREFIX": cfg.parentdir_prefix, "VERSIONFILE_SOURCE": cfg.versionfile_source, }) cmds["build_exe"] = cmd_build_exe del cmds["build_py"] # we override different "sdist" commands for both environments if "setuptools" in sys.modules: from setuptools.command.sdist import sdist as _sdist else: from distutils.command.sdist import sdist as _sdist class cmd_sdist(_sdist): def run(self): versions = get_versions() self._versioneer_generated_versions = versions # unless we update this, the command will keep using the old # version self.distribution.metadata.version = versions["version"] return _sdist.run(self) def make_release_tree(self, base_dir, files): root = get_root() cfg = get_config_from_root(root) _sdist.make_release_tree(self, base_dir, files) # now locate _version.py in the new base_dir directory # (remembering that it may be a hardlink) and replace it with an # updated value target_versionfile = os.path.join(base_dir, cfg.versionfile_source) print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, self._versioneer_generated_versions) cmds["sdist"] = cmd_sdist return cmds CONFIG_ERROR = """ setup.cfg is missing the necessary Versioneer configuration. You need a section like: [versioneer] VCS = git style = pep440 versionfile_source = src/myproject/_version.py versionfile_build = myproject/_version.py tag_prefix = "" parentdir_prefix = myproject- You will also need to edit your setup.py to use the results: import versioneer setup(version=versioneer.get_version(), cmdclass=versioneer.get_cmdclass(), ...) Please read the docstring in ./versioneer.py for configuration instructions, edit setup.cfg, and re-run the installer or 'python versioneer.py setup'. """ SAMPLE_CONFIG = """ # See the docstring in versioneer.py for instructions. Note that you must # re-run 'versioneer.py setup' after changing this section, and commit the # resulting files. 
[versioneer] #VCS = git #style = pep440 #versionfile_source = #versionfile_build = #tag_prefix = #parentdir_prefix = """ INIT_PY_SNIPPET = """ from ._version import get_versions __version__ = get_versions()['version'] del get_versions """ def do_setup(): root = get_root() try: cfg = get_config_from_root(root) except (EnvironmentError, configparser.NoSectionError, configparser.NoOptionError) as e: if isinstance(e, (EnvironmentError, configparser.NoSectionError)): print("Adding sample versioneer config to setup.cfg", file=sys.stderr) with open(os.path.join(root, "setup.cfg"), "a") as f: f.write(SAMPLE_CONFIG) print(CONFIG_ERROR, file=sys.stderr) return 1 print(" creating %s" % cfg.versionfile_source) with open(cfg.versionfile_source, "w") as f: LONG = LONG_VERSION_PY[cfg.VCS] f.write(LONG % {"DOLLAR": "$", "STYLE": cfg.style, "TAG_PREFIX": cfg.tag_prefix, "PARENTDIR_PREFIX": cfg.parentdir_prefix, "VERSIONFILE_SOURCE": cfg.versionfile_source, }) ipy = os.path.join(os.path.dirname(cfg.versionfile_source), "__init__.py") if os.path.exists(ipy): try: with open(ipy, "r") as f: old = f.read() except EnvironmentError: old = "" if INIT_PY_SNIPPET not in old: print(" appending to %s" % ipy) with open(ipy, "a") as f: f.write(INIT_PY_SNIPPET) else: print(" %s unmodified" % ipy) else: print(" %s doesn't exist, ok" % ipy) ipy = None # Make sure both the top-level "versioneer.py" and versionfile_source # (PKG/_version.py, used by runtime code) are in MANIFEST.in, so # they'll be copied into source distributions. Pip won't be able to # install the package without this. manifest_in = os.path.join(root, "MANIFEST.in") simple_includes = set() try: with open(manifest_in, "r") as f: for line in f: if line.startswith("include "): for include in line.split()[1:]: simple_includes.add(include) except EnvironmentError: pass # That doesn't cover everything MANIFEST.in can do # (http://docs.python.org/2/distutils/sourcedist.html#commands), so # it might give some false negatives. Appending redundant 'include' # lines is safe, though. if "versioneer.py" not in simple_includes: print(" appending 'versioneer.py' to MANIFEST.in") with open(manifest_in, "a") as f: f.write("include versioneer.py\n") else: print(" 'versioneer.py' already in MANIFEST.in") if cfg.versionfile_source not in simple_includes: print(" appending versionfile_source ('%s') to MANIFEST.in" % cfg.versionfile_source) with open(manifest_in, "a") as f: f.write("include %s\n" % cfg.versionfile_source) else: print(" versionfile_source already in MANIFEST.in") # Make VCS-specific changes. For git, this means creating/changing # .gitattributes to mark _version.py for export-time keyword # substitution. do_vcs_install(manifest_in, cfg.versionfile_source, ipy) return 0 def scan_setup_py(): found = set() setters = False errors = 0 with open("setup.py", "r") as f: for line in f.readlines(): if "import versioneer" in line: found.add("import") if "versioneer.get_cmdclass()" in line: found.add("cmdclass") if "versioneer.get_version()" in line: found.add("get_version") if "versioneer.VCS" in line: setters = True if "versioneer.versionfile_source" in line: setters = True if len(found) != 3: print("") print("Your setup.py appears to be missing some important items") print("(but I might be wrong). 
Please make sure it has something") print("roughly like the following:") print("") print(" import versioneer") print(" setup( version=versioneer.get_version(),") print(" cmdclass=versioneer.get_cmdclass(), ...)") print("") errors += 1 if setters: print("You should remove lines like 'versioneer.VCS = ' and") print("'versioneer.versionfile_source = ' . This configuration") print("now lives in setup.cfg, and should be removed from setup.py") print("") errors += 1 return errors if __name__ == "__main__": cmd = sys.argv[1] if cmd == "setup": errors = do_setup() errors += scan_setup_py() if errors: sys.exit(1) lttnganalyses-0.6.1/lttng-periodlog0000775000175000017500000000235512746220524021127 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from lttnganalyses.cli import periods if __name__ == '__main__': periods.runlog() lttnganalyses-0.6.1/lttng-periodfreq0000775000175000017500000000235112746220524021277 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # The MIT License (MIT) # # Copyright (C) 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
from lttnganalyses.cli import periods if __name__ == '__main__': periods.runfreq() lttnganalyses-0.6.1/lttng-analyses-record0000775000175000017500000001031112745424023022224 0ustar mjeansonmjeanson00000000000000#!/bin/bash # # The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. # Helper to setup a local LTTng tracing session with the appropriate # settings for the lttng analyses scripts SESSION_NAME="lttng-analysis-$RANDOM" destroy() { lttng destroy $SESSION_NAME >/dev/null echo "" echo "You can now launch the analyses scripts on /$TRACEPATH" exit 0 } if test "$1" = "-h" -o "$1" = "--help"; then echo "usage : $0" exit 0 fi pgrep -u root lttng-sessiond >/dev/null if test $? != 0; then echo "Starting lttng-sessiond as root (trying sudo, start manually if \ it fails)" sudo lttng-sessiond -d if test $? != 0; then exit 1 fi fi SUDO="" groups|grep tracing >/dev/null if test $? != 0; then echo "You are not a member of the tracing group, so you need root \ access, the script will try with sudo" SUDO="sudo" fi # check if lttng command if in the path # check if the user can execute the command (with sudo if not in tracing group) # check if lttng-modules is installed $SUDO lttng list -k | grep sched_switch >/dev/null if test $? != 0; then echo "Something went wrong executing \"$SUDO lttng list -k | grep sched_switch\", \ try to fix the problem manually and then start the script again" fi # if our random session name was already in use, add more randomness... $SUDO lttng list | grep $SESSION_NAME if test $? = 0; then SESSION_NAME="$SESSION_NAME-$RANDOM" fi $SUDO lttng list | grep $SESSION_NAME if test $? = 0; then echo "Cannot create a random session name, something must be wrong" exit 2 fi lttng create $SESSION_NAME >/tmp/lttngout [[ $? 
!= 0 ]] && exit 2 TRACEPATH=$(grep Traces /tmp/lttngout | cut -d'/' -f2-) rm /tmp/lttngout trap "destroy" SIGINT SIGTERM lttng enable-channel -k chan1 --subbuf-size=8M >/dev/null # events that always work lttng enable-event -s $SESSION_NAME -k sched_switch,sched_wakeup,sched_waking,block_rq_complete,block_rq_issue,block_bio_remap,block_bio_backmerge,netif_receive_skb,net_dev_xmit,sched_process_fork,sched_process_exec,lttng_statedump_process_state,lttng_statedump_file_descriptor,lttng_statedump_block_device,mm_vmscan_wakeup_kswapd,mm_page_free,mm_page_alloc,block_dirty_buffer,irq_handler_entry,irq_handler_exit,softirq_entry,softirq_exit,softirq_raise,irq_softirq_entry,irq_softirq_exit,irq_softirq_raise,kmem_mm_page_alloc,kmem_mm_page_free -c chan1 >/dev/null [[ $? != 0 ]] && echo "Warning: some events were not enabled, some analyses might not be complete" # events that might fail on specific kernels and that are not mandatory lttng enable-event -s $SESSION_NAME -k writeback_pages_written -c chan1 >/dev/null 2>&1 [[ $? != 0 ]] && echo "Warning: Optional event writeback_pages_written could not be enabled, everything will still work (experimental feature)" lttng enable-event -s $SESSION_NAME -k -c chan1 --syscall -a >/dev/null [[ $? != 0 ]] && exit 2 # if you want to add Perf counters, do something like that : #lttng add-context -s $SESSION_NAME -k -t perf:cache-misses -t perf:major-faults -t perf:branch-load-misses >/dev/null lttng start $SESSION_NAME >/dev/null [[ $? != 0 ]] && exit 2 echo -n "The trace is now recording, press ctrl+c to stop it " while true; do echo -n "." sleep 1 done destroy lttnganalyses-0.6.1/lttnganalyses/0000775000175000017500000000000013033742625020753 5ustar mjeansonmjeanson00000000000000lttnganalyses-0.6.1/lttnganalyses/core/0000775000175000017500000000000013033742625021703 5ustar mjeansonmjeanson00000000000000lttnganalyses-0.6.1/lttnganalyses/core/irq.py0000664000175000017500000001345312746667037023073 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
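# Core of the IRQ analysis: consumes irq_handler_entry/exit and softirq_exit
# state notifications and aggregates, per period, the count, the min/max/total
# duration, and the softirq raise latency of each interrupt source.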
from .analysis import Analysis, PeriodData class _PeriodData(PeriodData): def __init__(self): # Indexed by irq 'id' (irq or vec) self.hard_irq_stats = {} self.softirq_stats = {} # Log of individual interrupts self.irq_list = [] class IrqAnalysis(Analysis): def __init__(self, state, conf): notification_cbs = { 'irq_handler_entry': self._process_irq_handler_entry, 'irq_handler_exit': self._process_irq_handler_exit, 'softirq_exit': self._process_softirq_exit } super().__init__(state, conf, notification_cbs) def _create_period_data(self): return _PeriodData() def _process_irq_handler_entry(self, period_data, **kwargs): id = kwargs['id'] name = kwargs['irq_name'] if id not in period_data.hard_irq_stats: period_data.hard_irq_stats[id] = HardIrqStats(name) elif name not in period_data.hard_irq_stats[id].names: period_data.hard_irq_stats[id].names.append(name) def _process_irq_handler_exit(self, period_data, **kwargs): irq = kwargs['hard_irq'] if not self._filter_cpu(irq.cpu_id): return if self._conf.min_duration is not None and \ irq.duration < self._conf.min_duration: return if self._conf.max_duration is not None and \ irq.duration > self._conf.max_duration: return period_data.irq_list.append(irq) if irq.id not in period_data.hard_irq_stats: period_data.hard_irq_stats[irq.id] = HardIrqStats() period_data.hard_irq_stats[irq.id].update_stats(irq) def _process_softirq_exit(self, period_data, **kwargs): irq = kwargs['softirq'] if not self._filter_cpu(irq.cpu_id): return if self._conf.min_duration is not None and \ irq.duration < self._conf.min_duration: return if self._conf.max_duration is not None and \ irq.duration > self._conf.max_duration: return period_data.irq_list.append(irq) if irq.id not in period_data.softirq_stats: name = SoftIrqStats.names[irq.id] period_data.softirq_stats[irq.id] = SoftIrqStats(name) period_data.softirq_stats[irq.id].update_stats(irq) class IrqStats(): def __init__(self, name): self._name = name self.min_duration = None self.max_duration = None self.total_duration = 0 self.irq_list = [] @property def name(self): return self._name @property def count(self): return len(self.irq_list) def update_stats(self, irq): if self.min_duration is None or irq.duration < self.min_duration: self.min_duration = irq.duration if self.max_duration is None or irq.duration > self.max_duration: self.max_duration = irq.duration self.total_duration += irq.duration self.irq_list.append(irq) def reset(self): self.min_duration = None self.max_duration = None self.total_duration = 0 self.irq_list = [] class HardIrqStats(IrqStats): NAMES_SEPARATOR = ', ' def __init__(self, name='unknown'): super().__init__(name) self.names = [name] @property def name(self): return self.NAMES_SEPARATOR.join(self.names) class SoftIrqStats(IrqStats): # from include/linux/interrupt.h names = {0: 'HI_SOFTIRQ', 1: 'TIMER_SOFTIRQ', 2: 'NET_TX_SOFTIRQ', 3: 'NET_RX_SOFTIRQ', 4: 'BLOCK_SOFTIRQ', 5: 'BLOCK_IOPOLL_SOFTIRQ', 6: 'TASKLET_SOFTIRQ', 7: 'SCHED_SOFTIRQ', 8: 'HRTIMER_SOFTIRQ', 9: 'RCU_SOFTIRQ'} def __init__(self, name): super().__init__(name) self.min_raise_latency = None self.max_raise_latency = None self.total_raise_latency = 0 self.raise_count = 0 def update_stats(self, irq): super().update_stats(irq) if irq.raise_ts is None: return raise_latency = irq.begin_ts - irq.raise_ts if self.min_raise_latency is None or \ raise_latency < self.min_raise_latency: self.min_raise_latency = raise_latency if self.max_raise_latency is None or \ raise_latency > self.max_raise_latency: self.max_raise_latency = raise_latency 
self.total_raise_latency += raise_latency self.raise_count += 1 def reset(self): super().reset() self.min_raise_latency = None self.max_raise_latency = None self.total_raise_latency = 0 self.raise_count = 0 lttnganalyses-0.6.1/lttnganalyses/core/analysis.py0000664000175000017500000002716112745737273024123 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . import period as core_period import enum class AnalysisConfig: def __init__(self): self.refresh_period = None self.begin_ts = None self.end_ts = None self.min_duration = None self.max_duration = None self.proc_list = None self.tid_list = None self.cpu_list = None self.period_def_registry = core_period.PeriodDefinitionRegistry() # base class for all specific period data classes in specific analyses class PeriodData: def _set_period(self, period): self._period = period @property def period(self): return self._period @enum.unique class AnalysisCallbackType(enum.Enum): TICK_CB = 'tick' class Analysis: def __init__(self, state, conf, state_cbs): self._state = state self._conf = conf self._state_cbs = state_cbs self._period_key = None self._first_event_ts = None self._last_event_ts = None self._notification_cli_cbs = {} self._cbs = {} period_cbs = { core_period.PeriodEngineCallbackType.PERIOD_BEGIN: self._on_period_begin, core_period.PeriodEngineCallbackType.PERIOD_END: self._on_period_end, } self._period_engine = core_period.PeriodEngine( self._conf.period_def_registry, period_cbs) # This dict maps period objects (from the period module) to # period data objects. Period data objects are created by a # specific analysis implementing _create_period_data(). self._period_data = {} # Mapping between a period name and it's nesting level (0 = root). self._period_nesting = {} self.started = False self.ended = False @property def first_event_ts(self): return self._first_event_ts @property def last_event_ts(self): return self._last_event_ts def period_nesting_level(self, period_name): if self._conf.period_def_registry.is_empty or period_name is None: return 0 return self._period_nesting[period_name] # Returns the period data object associated with a given period. def _get_period_data(self, period): return self._period_data.get(period) # Sets the period data object associated with a given period. def _set_period_data(self, period, data): self._period_data[period] = data # Removes the period data object associated with a given period. 
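    # (The Period object itself remains owned by the period engine.)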
def _remove_period_data(self, period): del self._period_data[period] # Creates the unique "definition-less" period. This is used when # there are no user-specified periods. def _create_defless_period(self, evt): period = core_period.Period(None, None, evt, None) self._on_period_begin(period) # Returns the "definition-less" period. def _get_defless_period(self): if len(self._period_data) == 0: return return next(iter(self._period_data.keys())) # Removes the "definition-less" period. def _remove_defless_period(self, completed, evt): period = self._get_defless_period() if period is None: return period.end_evt = evt period.completed = completed self._on_period_end(period) assert(len(self._period_data) == 0) # Creates a fresh specific period data object. This must be # implemented by a specific analysis. def _create_period_data(self): raise NotImplementedError() def _begin_period_cb(self, period_data): pass def _end_period_cb(self, period_data, completed, begin_captures, end_captures): pass # This is called back by the period engine when a new period is # created. `period` is the created period, and `evt` is the event # that triggered the beginning of this period (the original event, # while `period.begin_evt` is a copy of this event). def _on_period_begin(self, period): # create the specific analysis's period data object period_data = self._create_period_data() # associate the period data object to this period object period_data._set_period(period) self._set_period_data(period, period_data) # register state notification callbacks with this period data object self._state.register_notification_cbs(period_data, self._state_cbs) # call specific analysis's beginning of period callback self._begin_period_cb(period_data) # This is called back by the period engine when a period is finished, # or closed. # # If `period.completed` is True, then the period finishes because # its ending expression was satisfied by an event (`period.end_evt`). # Otherwise, the period finishes because one of its ancestors finishes, # or because the period engine user asked for it. def _on_period_end(self, period): # get the period data object associated with this period object period_data = self._get_period_data(period) # call specific analysis's end of period callback self._end_period_cb(period_data, period.completed, period.begin_captures, period.end_captures) # send tick notification to owner (CLI) self._send_notification_cb(AnalysisCallbackType.TICK_CB, period_data, end_ns=self.last_event_ts) # clear registered state notification callbacks associated with # this period self._state.clear_period_notification_cbs(period_data) # remove this period data object self._remove_period_data(period) # This is called by the owner of this analysis when an event must # be processed (`ev`). def process_event(self, ev): self._check_analysis_end(ev) if self.ended: return if self._first_event_ts is None: self._first_event_ts = ev.timestamp self._last_event_ts = ev.timestamp if not self.started: if self._conf.begin_ts: self._check_analysis_begin(ev) if not self.started: return else: self.started = True # Run the period engine. This call has the effect of calling # back _on_period_begin() or _on_period_end(), zero or more # times, for each beginning and ending period according to the # registered period definitions. self._period_engine.process_event(ev) # check the refresh period conditions self._check_refresh(ev) # Create the mapping between a period name and its nesting level. # Recursively iterate over all children. 
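    # For example, with nested definitions a -> b -> c, the resulting
    # levels are a: 0, b: 1 and c: 2.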
def _get_period_nesting_level(self, period_def, level): for child in period_def.children: self._get_period_nesting_level(child, level + 1) self._period_nesting[period_def.name] = level # Iterate over all root period definitions to create the map between # a period name and it's nesting level. def _create_period_nesting_map(self): for period_def in self._conf.period_def_registry._root_period_defs: self._get_period_nesting_level(period_def, 0) # Called by the owner of this analysis to indicate that this # analysis is starting. def begin_analysis(self, evt): # If we do not have any period defined, create the # "definition-less" period starting at the first event. if (self._conf.period_def_registry.is_empty and self._conf.begin_ts is None): self._create_defless_period(evt) self._create_period_nesting_map() def end_analysis(self): # let the periods know that it is the last one self.ended = True # This is the end of the analysis, so we need to remove all # the existing periods. This means either remove all the existing # periods in the period engine, or remove the unique, # "definition-less" period created here. if self._conf.period_def_registry.is_empty: self._remove_defless_period(False, None) else: self._period_engine.remove_all_periods() self._period_data.clear() # Send an empty TICK notification if the CLI needs to # do something at the end even if there are no existing # periods. self._send_notification_cb(AnalysisCallbackType.TICK_CB, None, end_ns=self._last_event_ts) def register_notification_cbs(self, cbs): for name in cbs: if name not in self._notification_cli_cbs: self._notification_cli_cbs[name] = [] self._notification_cli_cbs[name].append(cbs[name]) def _send_notification_cb(self, name, period, **kwargs): if name in self._notification_cli_cbs: for cb in self._notification_cli_cbs[name]: cb(period, **kwargs) def _register_cbs(self, cbs): self._cbs = cbs def _process_event_cb(self, ev): name = ev.name if name in self._cbs: self._cbs[name](ev) elif 'syscall_entry' in self._cbs and \ (name.startswith('sys_') or name.startswith('syscall_entry_')): self._cbs['syscall_entry'](ev) elif 'syscall_exit' in self._cbs and \ (name.startswith('exit_syscall') or name.startswith('syscall_exit_')): self._cbs['syscall_exit'](ev) def _check_analysis_begin(self, ev): if self._conf.begin_ts and ev.timestamp >= self._conf.begin_ts: self._create_defless_period(ev) self.started = True def _check_analysis_end(self, ev): if self._conf.end_ts and ev.timestamp > self._conf.end_ts: self.ended = True def _check_refresh(self, evt): if self._conf.refresh_period is None: return period = self._get_defless_period() if evt.timestamp >= (period.begin_evt.timestamp + self._conf.refresh_period): # remove the current period and create a new one self._remove_defless_period(True, evt) self._create_defless_period(evt) def _filter_process(self, proc): if not proc: return True if self._conf.proc_list and proc.comm not in self._conf.proc_list: return False if self._conf.tid_list and proc.tid not in self._conf.tid_list: return False return True def _filter_cpu(self, cpu): return not (self._conf.cpu_list and cpu not in self._conf.cpu_list) lttnganalyses-0.6.1/lttnganalyses/core/period.py0000664000175000017500000005301713033475105023541 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Philippe Proulx # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without 
restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . import event as core_event from functools import partial import babeltrace as bt import enum class InvalidPeriodDefinition(Exception): pass # period definition registry, owner of the whole tree of periods class PeriodDefinitionRegistry: def __init__(self): self._root_period_defs = set() self._named_period_defs = {} # name to hierarchy self._full_period_path = {} def period_full_path(self, name): return self._full_period_path[name] def has_period_def(self, name): return name in self._named_period_defs def add_full_period_path(self, period_name, parent_name): period_path = [period_name] period_path_str = "" if parent_name is None: self._full_period_path[period_name] = period_name return parent = self.get_period_def(parent_name) while parent is not None: period_path.append(parent.name) parent = parent.parent period_path.reverse() for i in period_path: if len(period_path_str) == 0: period_path_str = i else: period_path_str = "%s/%s" % (period_path_str, i) self._full_period_path[period_name] = period_path_str def add_period_def(self, parent_name, period_name, begin_expr, end_expr, begin_captures_exprs, end_captures_exprs): # validate unique period name (if named) if self.has_period_def(period_name): raise InvalidPeriodDefinition('Cannot redefine period "{}"'.format( period_name)) # validate that parent exists if it's set if parent_name is not None and not self.has_period_def(parent_name): fmt = 'Cannot find parent period named "{}" (as parent of ' \ 'period "{}")' msg = fmt.format(parent_name, period_name) raise InvalidPeriodDefinition(msg) # create period, and associate parent and children parent = None if parent_name is not None: parent = self.get_period_def(parent_name) period_def = PeriodDefinition(parent, period_name, begin_expr, end_expr, begin_captures_exprs, end_captures_exprs) if parent is not None: parent.children.add(period_def) # validate new period definition PeriodDefinitionValidator(period_def) if period_def.parent is None: self._root_period_defs.add(period_def) if period_def.name is not None: self._named_period_defs[period_def.name] = period_def self.add_full_period_path(period_name, parent_name) def get_period_def(self, name): return self._named_period_defs.get(name) @property def root_period_defs(self): for period_def in self._root_period_defs: yield period_def @property def named_period_defs(self): return self._named_period_defs @property def is_empty(self): return len(self._root_period_defs) == 0 and \ len(self._named_period_defs) == 0 # definition of a period class PeriodDefinition: def __init__(self, parent, name, begin_expr, end_expr, begin_captures_exprs, end_captures_exprs): self._parent = parent 
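        # Children are attached by PeriodDefinitionRegistry.add_period_def().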
self._children = set() self._name = name self._begin_expr = begin_expr self._end_expr = end_expr self._begin_captures_exprs = begin_captures_exprs self._end_captures_exprs = end_captures_exprs @property def name(self): return self._name @property def parent(self): return self._parent @property def begin_expr(self): return self._begin_expr @property def end_expr(self): return self._end_expr @property def begin_captures_exprs(self): return self._begin_captures_exprs @property def end_captures_exprs(self): return self._end_captures_exprs @property def children(self): return self._children class _Expression: pass class _BinaryExpression(_Expression): def __init__(self, lh_expr, rh_expr): self._lh_expr = lh_expr self._rh_expr = rh_expr @property def lh_expr(self): return self._lh_expr @property def rh_expr(self): return self._rh_expr class _UnaryExpression(_Expression): def __init__(self, expr): self._expr = expr @property def expr(self): return self._expr class LogicalNot(_UnaryExpression): def __repr__(self): return '!({})'.format(self.expr) class LogicalAnd(_BinaryExpression): def __repr__(self): return '({} && {})'.format(self.lh_expr, self.rh_expr) class LogicalOr(_BinaryExpression): def __repr__(self): return '({} || {})'.format(self.lh_expr, self.rh_expr) class GlobEq(_BinaryExpression): def __init__(self, lh_expr, rh_expr): super().__init__(lh_expr, rh_expr) self._compile() def _compile(self): import fnmatch import re pattern = self.rh_expr.value regex = fnmatch.translate(pattern) self._regex = re.compile(regex) @property def regex(self): return self._regex def __repr__(self): return '({} =* {})'.format(self.lh_expr, self.rh_expr) class Eq(_BinaryExpression): def __repr__(self): return '({} == {})'.format(self.lh_expr, self.rh_expr) class Lt(_BinaryExpression): def __repr__(self): return '({} < {})'.format(self.lh_expr, self.rh_expr) class LtEq(_BinaryExpression): def __repr__(self): return '({} <= {})'.format(self.lh_expr, self.rh_expr) class Gt(_BinaryExpression): def __repr__(self): return '({} > {})'.format(self.lh_expr, self.rh_expr) class GtEq(_BinaryExpression): def __repr__(self): return '({} >= {})'.format(self.lh_expr, self.rh_expr) class Number(_Expression): def __init__(self, value): self._value = value @property def value(self): return self._value def __repr__(self): return '{}'.format(self.value) class String(_Expression): def __init__(self, value): self._value = value @property def value(self): return self._value def __repr__(self): return '"{}"'.format(self.value) @enum.unique class DynScope(enum.Enum): AUTO = 'auto' TPH = '$pkt_header' SPC = '$pkt_ctx' SEH = '$header' SEC = '$stream_ctx' EC = '$ctx' EP = '$payload' class _SingleChildNode(_Expression): def __init__(self, child): self._child = child @property def child(self): return self._child class ParentScope(_SingleChildNode): def __repr__(self): return '$parent.{}'.format(self.child) class BeginScope(_SingleChildNode): def __repr__(self): return '$begin.{}'.format(self.child) class EventScope(_SingleChildNode): def __repr__(self): return '$evt.{}'.format(self.child) class DynamicScope(_SingleChildNode): def __init__(self, dyn_scope, child): super().__init__(child) self._dyn_scope = dyn_scope @property def dyn_scope(self): return self._dyn_scope def __repr__(self): if self._dyn_scope == DynScope.AUTO: return repr(self.child) return '{}.{}'.format(self.dyn_scope.value, self.child) class EventFieldName(_Expression): def __init__(self, name): self._name = name @property def name(self): return self._name def 
__repr__(self): return self._name class EventName(_Expression): def __repr__(self): return '$name' class IllegalExpression(Exception): pass class PeriodDefinitionValidator: def __init__(self, period_def): self._period_def = period_def self._validate_expr_cbs = { LogicalNot: self._validate_unary_expr, LogicalAnd: self._validate_binary_expr, LogicalOr: self._validate_binary_expr, GlobEq: self._validate_comp, Eq: self._validate_comp, Lt: self._validate_comp, LtEq: self._validate_comp, Gt: self._validate_comp, GtEq: self._validate_comp, ParentScope: self._validate_parent_scope, } self._validate_expr(period_def.begin_expr) self._validate_expr(period_def.end_expr) def _validate_unary_expr(self, not_expr): self._validate_expr(not_expr.expr) def _validate_binary_expr(self, and_expr): self._validate_expr(and_expr.lh_expr) self._validate_expr(and_expr.rh_expr) def _validate_parent_scope(self, scope): if self._period_def.parent is None: raise IllegalExpression('Cannot refer to parent context without ' 'a named parent period') if type(scope.child) is not BeginScope: raise IllegalExpression('Must refer to the begin context in a ' 'parent context') self._validate_expr(scope.child) def _validate_comp(self, comp_expr): self._validate_expr(comp_expr.lh_expr) self._validate_expr(comp_expr.rh_expr) def _validate_expr(self, expr): if type(expr) in self._validate_expr_cbs: self._validate_expr_cbs[type(expr)](expr) class _MatchContext: def __init__(self, evt, begin_evt, parent_begin_evt): self._evt = evt self._begin_evt = begin_evt self._parent_begin_evt = parent_begin_evt @property def evt(self): return self._evt @property def begin_evt(self): return self._begin_evt @property def parent_begin_evt(self): return self._parent_begin_evt _DYN_SCOPE_TO_BT_CTF_SCOPE = { DynScope.TPH: bt.CTFScope.TRACE_PACKET_HEADER, DynScope.SPC: bt.CTFScope.STREAM_PACKET_CONTEXT, DynScope.SEH: bt.CTFScope.STREAM_EVENT_HEADER, DynScope.SEC: bt.CTFScope.STREAM_EVENT_CONTEXT, DynScope.EC: bt.CTFScope.EVENT_CONTEXT, DynScope.EP: bt.CTFScope.EVENT_FIELDS, } def _resolve_event_expr(event, expr): # event not found if event is None: return # event name if type(expr.child) is EventName: return event.name # default, automatic dynamic scope dyn_scope = DynScope.AUTO if type(expr.child) is DynamicScope: # select specific dynamic scope expr = expr.child dyn_scope = expr.dyn_scope if type(expr.child) is EventFieldName: expr = expr.child if dyn_scope == DynScope.AUTO: # automatic dynamic scope if expr.name in event: return event[expr.name] # event field not found return # specific dynamic scope bt_ctf_scope = _DYN_SCOPE_TO_BT_CTF_SCOPE[dyn_scope] return event.field_with_scope(expr.name, bt_ctf_scope) assert(False) # This exquisite function takes an expression and resolves it to # an actual value (Python's number/string) considering the current # matching context. 
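# For example, $evt.$name resolves to the current event's name, and
# $begin.$evt.tid resolves to the 'tid' field of the event that began
# the period ('tid' being an illustrative field name).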
def _resolve_expr(expr, match_context): if type(expr) is ParentScope: begin_scope = expr.child event_scope = begin_scope.child return _resolve_event_expr(match_context.parent_begin_evt, event_scope) if type(expr) is BeginScope: # event in the begin context event_scope = expr.child return _resolve_event_expr(match_context.begin_evt, event_scope) if type(expr) is EventScope: # current event return _resolve_event_expr(match_context.evt, expr) if type(expr) is Number: return expr.value if type(expr) is String: return expr.value assert(False) class _Matcher: def __init__(self, expr, match_context): self._match_context = match_context self._expr_matchers = { LogicalAnd: self._and_expr_matches, LogicalOr: self._or_expr_matches, LogicalNot: self._not_expr_matches, GlobEq: self._glob_eq_expr_matches, Eq: partial(self._comp_expr_matches, lambda lh, rh: lh == rh), Lt: partial(self._comp_expr_matches, lambda lh, rh: lh < rh), LtEq: partial(self._comp_expr_matches, lambda lh, rh: lh <= rh), Gt: partial(self._comp_expr_matches, lambda lh, rh: lh > rh), GtEq: partial(self._comp_expr_matches, lambda lh, rh: lh >= rh), } self._matches = self._expr_matches(expr) def _and_expr_matches(self, expr): return (self._expr_matches(expr.lh_expr) and self._expr_matches(expr.rh_expr)) def _or_expr_matches(self, expr): return (self._expr_matches(expr.lh_expr) or self._expr_matches(expr.rh_expr)) def _not_expr_matches(self, expr): return not self._expr_matches(expr.expr) def _glob_eq_expr_matches(self, expr): def compfn(lh, rh): return bool(expr.regex.match(lh)) return self._comp_expr_matches(compfn, expr) def _comp_expr_matches(self, compfn, expr): lh_value = _resolve_expr(expr.lh_expr, self._match_context) rh_value = _resolve_expr(expr.rh_expr, self._match_context) # make sure both sides are found if lh_value is None or rh_value is None: return False # cast RHS to int if LHS is an int if type(lh_value) is int and type(rh_value) is float: rh_value = int(rh_value) # compare types first if type(lh_value) is not type(rh_value): return False # compare field to a literal value return compfn(lh_value, rh_value) def _expr_matches(self, expr): return self._expr_matchers[type(expr)](expr) @property def matches(self): return self._matches def _expr_matches(expr, match_context): return _Matcher(expr, match_context).matches def create_conjunction_from_exprs(exprs): if len(exprs) == 0: return cur_expr = exprs[0] for expr in exprs[1:]: cur_expr = LogicalAnd(cur_expr, expr) return cur_expr def create_disjunction_from_exprs(exprs): if len(exprs) == 0: return cur_expr = exprs[0] for expr in exprs[1:]: cur_expr = LogicalOr(cur_expr, expr) return cur_expr @enum.unique class PeriodEngineCallbackType(enum.Enum): PERIOD_BEGIN = 1 PERIOD_END = 2 class Period: def __init__(self, definition, parent, begin_evt, begin_captures): begin_evt_copy = core_event.Event(begin_evt) self._begin_evt = begin_evt_copy self._end_evt = None self._completed = False self._definition = definition self._parent = parent self._children = set() self._begin_captures = begin_captures self._end_captures = {} @property def begin_evt(self): return self._begin_evt @property def end_evt(self): return self._end_evt @end_evt.setter def end_evt(self, evt): self._end_evt = evt @property def definition(self): return self._definition @property def parent(self): return self._parent @property def children(self): return self._children @property def completed(self): return self._completed @completed.setter def completed(self, value): self._completed = value @property def 
begin_captures(self): return self._begin_captures @property def end_captures(self): return self._end_captures class PeriodEngine: def __init__(self, registry, cbs): self._registry = registry self._cbs = cbs self._root_periods = set() def _cb_period_end(self, period): self._cbs[PeriodEngineCallbackType.PERIOD_END](period) def _cb_period_begin(self, period): self._cbs[PeriodEngineCallbackType.PERIOD_BEGIN](period) def _create_period(self, definition, parent, begin_evt, begin_captures): return Period(definition, parent, begin_evt, begin_captures) def _get_captures(self, captures_exprs, match_context): captures = {} for name, capture_expr in captures_exprs.items(): captures[name] = _resolve_expr(capture_expr, match_context) return captures def _process_event_add_periods(self, parent_period, child_periods, child_period_defs, evt): periods_to_add = set() for child_period_def in child_period_defs: match_context = self._create_begin_match_context(parent_period, evt) if _expr_matches(child_period_def.begin_expr, match_context): # match! add period captures = self._get_captures( child_period_def.begin_captures_exprs, match_context) period = self._create_period(child_period_def, parent_period, evt, captures) periods_to_add.add(period) # safe to add child periods now, outside the iteration for period_to_add in periods_to_add: self._cb_period_begin(period_to_add) child_periods.add(period_to_add) for child_period in child_periods: self._process_event_add_periods(child_period, child_period.children, child_period.definition.children, evt) def _process_event_begin(self, evt): defs = self._registry.root_period_defs self._process_event_add_periods(None, self._root_periods, defs, evt) def _create_begin_match_context(self, parent_period, evt): parent_begin_evt = None if parent_period is not None: parent_begin_evt = parent_period.begin_evt return _MatchContext(evt, evt, parent_begin_evt) def _create_end_match_context(self, period, evt): parent_begin_evt = None if period.parent is not None: parent_begin_evt = period.parent.begin_evt return _MatchContext(evt, period.begin_evt, parent_begin_evt) def _process_event_remove_period(self, child_periods, evt): for child_period in child_periods: self._process_event_remove_period(child_period.children, evt) child_periods_to_remove = set() for child_period in child_periods: match_context = self._create_end_match_context(child_period, evt) if _expr_matches(child_period.definition.end_expr, match_context): # set period's end captures end_captures_exprs = \ child_period.definition.end_captures_exprs captures = self._get_captures(end_captures_exprs, match_context) child_period._end_captures = captures # mark as to be removed child_periods_to_remove.add(child_period) # safe to remove child periods now, outside the iteration for child_period_to_remove in child_periods_to_remove: # set period's ending event and completed property child_period_to_remove.end_evt = evt child_period_to_remove.completed = True # also remove its own remaining child periods self._remove_periods(child_period_to_remove.children, evt) # call end of period user callback (this period matched) self._cb_period_end(child_period_to_remove) # remove period from set child_periods.remove(child_period_to_remove) def _process_event_end(self, evt): self._process_event_remove_period(self._root_periods, evt) def process_event(self, evt): self._process_event_end(evt) self._process_event_begin(evt) def _remove_periods(self, child_periods, evt): for child_period in child_periods: self._remove_periods(child_period.children, 
evt) # safe to remove child periods now, outside the iteration for child_period in child_periods: # set period's ending event and completed property child_period.end_evt = evt child_period.completed = False # call end of period user callback self._cb_period_end(child_period) child_periods.clear() def remove_all_periods(self): self._remove_periods(self._root_periods, None) @property def root_periods(self): return self._root_periods lttnganalyses-0.6.1/lttnganalyses/core/stats.py0000664000175000017500000000416512665072151023421 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from collections import namedtuple PrioEvent = namedtuple('PrioEvent', ['timestamp', 'prio']) class Stats(): def reset(self): raise NotImplementedError() class Process(Stats): def __init__(self, pid, tid, comm): self.pid = pid self.tid = tid self.comm = comm self.prio_list = [] @classmethod def new_from_process(cls, proc): return cls(proc.pid, proc.tid, proc.comm) def update_prio(self, timestamp, prio): self.prio_list.append(PrioEvent(timestamp, prio)) def reset(self): if self.prio_list: # Keep the last prio as the first for the next period self.prio_list = self.prio_list[-1:] class IO(Stats): def __init__(self): # Number of bytes read or written self.read = 0 self.write = 0 def reset(self): self.read = 0 self.write = 0 def __iadd__(self, other): self.read += other.read self.write += other.write return self lttnganalyses-0.6.1/lttnganalyses/core/sched.py0000664000175000017500000001270612746220524023350 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . import stats from .analysis import Analysis, PeriodData class _PeriodData(PeriodData): def __init__(self): # Log of individual wake scheduling events self.sched_list = [] self.min_latency = None self.max_latency = None self.total_latency = 0 self.tids = {} class SchedAnalysis(Analysis): def __init__(self, state, conf): notification_cbs = { 'sched_switch_per_tid': self._process_sched_switch, 'prio_changed': self._process_prio_changed, } super().__init__(state, conf, notification_cbs) def count(self, period_data): return len(period_data.sched_list) def _create_period_data(self): return _PeriodData() def _process_sched_switch(self, period_data, **kwargs): cpu_id = kwargs['cpu_id'] switch_ts = kwargs['timestamp'] wakee_proc = kwargs['wakee_proc'] waker_proc = kwargs['waker_proc'] next_tid = kwargs['next_tid'] wakeup_ts = wakee_proc.last_wakeup # print(period_data) if not self._filter_process(wakee_proc): return if not self._filter_cpu(cpu_id): return if wakeup_ts is None: return latency = switch_ts - wakeup_ts if self._conf.min_duration is not None and \ latency < self._conf.min_duration: return if self._conf.max_duration is not None and \ latency > self._conf.max_duration: return if waker_proc is not None and waker_proc.tid not in period_data.tids: period_data.tids[waker_proc.tid] = \ ProcessSchedStats.new_from_process(waker_proc) period_data.tids[waker_proc.tid].update_prio(switch_ts, waker_proc.prio) if next_tid not in period_data.tids: period_data.tids[next_tid] = \ ProcessSchedStats.new_from_process(wakee_proc) period_data.tids[next_tid].update_prio(switch_ts, wakee_proc.prio) sched_event = SchedEvent( wakeup_ts, switch_ts, wakee_proc, waker_proc, cpu_id) period_data.tids[next_tid].update_stats(sched_event) self._update_stats(period_data, sched_event) def _process_prio_changed(self, period_data, **kwargs): timestamp = kwargs['timestamp'] prio = kwargs['prio'] tid = kwargs['tid'] if tid not in period_data.tids: return period_data.tids[tid].update_prio(timestamp, prio) def _update_stats(self, period_data, sched_event): if period_data.min_latency is None or \ sched_event.latency < period_data.min_latency: period_data.min_latency = sched_event.latency if period_data.max_latency is None or \ sched_event.latency > period_data.max_latency: period_data.max_latency = sched_event.latency period_data.total_latency += sched_event.latency period_data.sched_list.append(sched_event) class ProcessSchedStats(stats.Process): def __init__(self, pid, tid, comm): super().__init__(pid, tid, comm) self.min_latency = None self.max_latency = None self.total_latency = 0 self.sched_list = [] @property def count(self): return len(self.sched_list) def update_stats(self, sched_event): if self.min_latency is None or sched_event.latency < self.min_latency: self.min_latency = sched_event.latency if self.max_latency is None or sched_event.latency > self.max_latency: self.max_latency = sched_event.latency self.total_latency += sched_event.latency self.sched_list.append(sched_event) def reset(self): super().reset() self.min_latency = None self.max_latency = None self.total_latency = 0 self.sched_list = [] class SchedEvent(): def __init__(self, wakeup_ts, switch_ts, wakee_proc, waker_proc, target_cpu): self.wakeup_ts = wakeup_ts self.switch_ts = switch_ts 
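        # The wakeup-to-switch latency below is derived from these two
        # timestamps.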
        self.wakee_proc = wakee_proc
        self.waker_proc = waker_proc
        self.prio = wakee_proc.prio
        self.target_cpu = target_cpu
        self.latency = switch_ts - wakeup_ts
lttnganalyses-0.6.1/lttnganalyses/core/periods.py0000664000175000017500000002060413033475105023720 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT)
#
# Copyright (C) 2016 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from .analysis import Analysis, PeriodData


class _PeriodData(PeriodData):
    def __init__(self):
        self._period_event = None

    @property
    def period_event(self):
        return self._period_event


class PeriodAnalysis(Analysis):
    def __init__(self, state, conf):
        super().__init__(state, conf, {})
        # This is a special case where we keep a global state instead
        # of a per-period state, since we are accumulating statistics
        # about all the periods.
        self._all_period_stats = {}
        self._all_period_list = []
        self._all_total_duration = 0
        self._all_min_duration = None
        self._all_max_duration = None
        # Internal map between currently active periods and their
        # corresponding PeriodEvent object.
        self._current_periods = {}

    def _create_period_data(self):
        return _PeriodData()

    @property
    def all_count(self):
        return len(self._all_period_list)

    @property
    def all_period_stats(self):
        return self._all_period_stats

    @property
    def all_period_list(self):
        return self._all_period_list

    @property
    def all_min_duration(self):
        return self._all_min_duration

    @property
    def all_max_duration(self):
        return self._all_max_duration

    @property
    def all_total_duration(self):
        return self._all_total_duration

    def update_global_stats(self, period_event):
        if self._all_min_duration is None or period_event.duration < \
                self._all_min_duration:
            self._all_min_duration = period_event.duration
        if self._all_max_duration is None or period_event.duration > \
                self._all_max_duration:
            self._all_max_duration = period_event.duration
        self._all_total_duration += period_event.duration

    # Beginning of a new period.
    def _begin_period_cb(self, period_data):
        # Only track real periods, not the dummy ones created
        # when no --period argument was passed.
        if period_data.period.definition is None:
            return

        period = period_data.period
        definition = period.definition
        if definition.name is None:
            name = ""
        else:
            name = definition.name
        if name not in self._all_period_stats:
            self._all_period_stats[name] = \
                PeriodStats.new_from_period(period)

        if period.parent is not None:
            parent = self._current_periods[period.parent]
        else:
            parent = None
        period_data._period_event = PeriodEvent(
            period.begin_evt.timestamp, definition.name, parent)
        self._all_period_list.append(period_data._period_event)
        self._current_periods[period] = period_data._period_event

    def _end_period_cb(self, period_data, completed, begin_captures,
                       end_captures):
        period = period_data.period
        if period.definition is None:
            return

        if completed is False:
            # We should eventually warn the user here, or keep the
            # event as incomplete or in a separate table.
            self._all_period_list.remove(period_data._period_event)
            return

        if period.definition.name is None:
            name = ""
        else:
            name = period.definition.name
        period_data._period_event.finish(
            self.last_event_ts, begin_captures, end_captures)
        self._all_period_stats[name].update_stats(
            period_data._period_event)
        self.update_global_stats(period_data._period_event)
        if period.parent is not None:
            parent = self._current_periods[period.parent]
            parent.add_child(period_data._period_event)
        del self._current_periods[period]


class PeriodStats():
    def __init__(self, name):
        self.name = name
        self.period_list = []
        self.min_duration = None
        self.max_duration = None
        self.total_duration = 0

    @classmethod
    def new_from_period(cls, period):
        if period.definition.name is None:
            return cls("")
        return cls(period.definition.name)

    @property
    def count(self):
        return len(self.period_list)

    def update_stats(self, period_event):
        if self.min_duration is None or period_event.duration < \
                self.min_duration:
            self.min_duration = period_event.duration
        if self.max_duration is None or period_event.duration > \
                self.max_duration:
            self.max_duration = period_event.duration
        self.total_duration += period_event.duration
        self.period_list.append(period_event)


class PeriodEvent():
    def __init__(self, start_ts, name, parent):
        self._start_ts = start_ts
        self._name = name
        self._parent = parent
        self._end_ts = None
        self._begin_captures = None
        self._end_captures = None
        # Only during the aggregation phase, store the list of
        # children we want to output.
        self._children = []

    @property
    def start_ts(self):
        return self._start_ts

    @property
    def end_ts(self):
        return self._end_ts

    @property
    def name(self):
        if self._name is None:
            return ""
        return self._name

    @property
    def duration(self):
        return self._end_ts - self._start_ts

    @property
    def begin_captures(self):
        return str(self._begin_captures)

    @property
    def end_captures(self):
        return str(self._end_captures)

    def filtered_captures(self, period_group_by):
        # Return a list of (field, value) tuples for all the captured
        # fields present in the period_group_by dict.
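        # For example, if period_group_by maps 'open' to ['fd'] and
        # this period captured fd = 3, the result is [('open.fd', 3)]
        # (illustrative names and values).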
_captures = [] if self._name not in period_group_by.keys(): return _captures if self._begin_captures is not None: for c in sorted(self._begin_captures.keys()): if c in period_group_by[self._name]: _captures.append(('%s.%s' % (self._name, c), self._begin_captures[c])) if self._end_captures is not None: for c in sorted(self._end_captures.keys()): if c in period_group_by[self._name]: _captures.append(('%s.%s' % (self._name, c), self._end_captures[c])) return _captures def full_captures(self): _captures = [] if self._begin_captures is not None: for c in self._begin_captures.keys(): _captures.append(('%s.%s' % (self._name, c), self._begin_captures[c])) if self._end_captures is not None: for c in self._end_captures.keys(): _captures.append(('%s.%s' % (self._name, c), self._end_captures[c])) return _captures @property def parent(self): return self._parent @property def children(self): return self._children def finish(self, end_ts, begin_captures, end_captures): self._end_ts = end_ts self._begin_captures = begin_captures self._end_captures = end_captures def add_child(self, child_period_event): self._children.append(child_period_event) lttnganalyses-0.6.1/lttnganalyses/core/io.py0000664000175000017500000004633112775773625022713 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . 
import stats
from .analysis import Analysis, PeriodData
from ..linuxautomaton import sv


class _PeriodData(PeriodData):
    def __init__(self):
        self.disks = {}
        self.ifaces = {}
        self.tids = {}


class IoAnalysis(Analysis):
    def __init__(self, state, conf):
        notification_cbs = {
            'net_dev_xmit': self._process_net_dev_xmit,
            'netif_receive_skb': self._process_netif_receive_skb,
            'block_rq_complete': self._process_block_rq_complete,
            'io_rq_exit': self._process_io_rq_exit,
            'create_fd': self._process_create_fd,
            'close_fd': self._process_close_fd,
            'update_fd': self._process_update_fd,
            'create_parent_proc': self._process_create_parent_proc,
            'lttng_statedump_block_device': self._process_statedump_block
        }

        super().__init__(state, conf, notification_cbs)

        if conf.cpu_list is not None:
            print('Warning: cpu filter not enabled on I/O analysis')

    def process_event(self, ev):
        super().process_event(ev)
        self._process_event_cb(ev)

    def _create_period_data(self):
        return _PeriodData()

    def disk_io_requests(self, period_data):
        for disk in period_data.disks.values():
            for io_rq in disk.rq_list:
                yield io_rq

    def io_requests(self, period_data):
        return self._get_io_requests(period_data)

    def open_io_requests(self, period_data):
        return self._get_io_requests(period_data, sv.IORequest.OP_OPEN)

    def read_io_requests(self, period_data):
        return self._get_io_requests(period_data, sv.IORequest.OP_READ)

    def write_io_requests(self, period_data):
        return self._get_io_requests(period_data, sv.IORequest.OP_WRITE)

    def close_io_requests(self, period_data):
        return self._get_io_requests(period_data, sv.IORequest.OP_CLOSE)

    def sync_io_requests(self, period_data):
        return self._get_io_requests(period_data, sv.IORequest.OP_SYNC)

    def read_write_io_requests(self, period_data):
        return self._get_io_requests(period_data,
                                     sv.IORequest.OP_READ_WRITE)

    def _get_io_requests(self, period_data, io_operation=None):
        """Create a generator of syscall io requests by operation.

        Args:
            io_operation (IORequest.OP_*, optional): The operation of
                the io_requests to return. Return all IO requests
                if None.
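
        Block-layer requests (sv.BlockIORequest instances) are always
        skipped; only syscall-level IO requests are yielded.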
""" for proc in period_data.tids.values(): for io_rq in proc.rq_list: if isinstance(io_rq, sv.BlockIORequest): continue if io_operation is None or \ sv.IORequest.is_equivalent_operation(io_operation, io_rq.operation): yield io_rq def get_files_stats(self, period_data): files_stats = {} for proc_stats in period_data.tids.values(): for fd_list in proc_stats.fds.values(): for fd_stats in fd_list: filename = fd_stats.filename # Add process name to generic filenames to # distinguish them if FileStats.is_generic_name(filename): filename += ' (%s)' % proc_stats.comm if filename not in files_stats: files_stats[filename] = FileStats(filename) files_stats[filename].update_stats(fd_stats, proc_stats) return files_stats @staticmethod def _assign_fds_to_parent(proc, parent): if proc.fds: toremove = [] for fd in proc.fds: if fd not in parent.fds: parent.fds[fd] = proc.fds[fd] else: # best effort to fix the filename if not parent.get_fd(fd).filename: parent.get_fd(fd).filename = proc.get_fd(fd).filename toremove.append(fd) for fd in toremove: del proc.fds[fd] def _process_net_dev_xmit(self, period_data, **kwargs): name = kwargs['iface_name'] sent_bytes = kwargs['sent_bytes'] if name not in period_data.ifaces: period_data.ifaces[name] = IfaceStats(name) period_data.ifaces[name].sent_packets += 1 period_data.ifaces[name].sent_bytes += sent_bytes def _process_netif_receive_skb(self, period_data, **kwargs): name = kwargs['iface_name'] recv_bytes = kwargs['recv_bytes'] if name not in period_data.ifaces: period_data.ifaces[name] = IfaceStats(name) period_data.ifaces[name].recv_packets += 1 period_data.ifaces[name].recv_bytes += recv_bytes def _process_block_rq_complete(self, period_data, **kwargs): req = kwargs['req'] proc = kwargs['proc'] disk = kwargs['disk'] if disk.dev not in period_data.disks: period_data.disks[disk.dev] = DiskStats.new_from_disk(disk) period_data.disks[disk.dev].update_stats(req) if proc is not None: if proc.tid not in period_data.tids: period_data.tids[proc.tid] = ProcessIOStats.new_from_process( proc) period_data.tids[proc.tid].update_block_stats(req) def _process_io_rq_exit(self, period_data, **kwargs): proc = kwargs['proc'] parent_proc = kwargs['parent_proc'] io_rq = kwargs['io_rq'] if proc.tid not in period_data.tids: period_data.tids[proc.tid] = ProcessIOStats.new_from_process(proc) if parent_proc.tid not in period_data.tids: period_data.tids[parent_proc.tid] = ( ProcessIOStats.new_from_process(parent_proc)) proc_stats = period_data.tids[proc.tid] parent_stats = period_data.tids[parent_proc.tid] fd_types = {} if io_rq.errno is None: if io_rq.operation == sv.IORequest.OP_READ or \ io_rq.operation == sv.IORequest.OP_WRITE: if parent_stats.get_fd(io_rq.fd) is None: return fd_types['fd'] = parent_stats.get_fd(io_rq.fd).fd_type elif io_rq.operation == sv.IORequest.OP_READ_WRITE: if parent_stats.get_fd(io_rq.fd_in) is None: return if parent_stats.get_fd(io_rq.fd_out) is None: return fd_types['fd_in'] = parent_stats.get_fd(io_rq.fd_in).fd_type fd_types['fd_out'] = parent_stats.get_fd(io_rq.fd_out).fd_type proc_stats.update_io_stats(io_rq, fd_types) parent_stats.update_fd_stats(io_rq) # Check if the proc stats comm corresponds to the actual # process comm. It might be that it was missing so far. 
if proc_stats.comm != proc.comm: proc_stats.comm = proc.comm if parent_stats.comm != parent_proc.comm: parent_stats.comm = parent_proc.comm def _process_create_parent_proc(self, period_data, **kwargs): proc = kwargs['proc'] parent_proc = kwargs['parent_proc'] if proc.tid not in period_data.tids: period_data.tids[proc.tid] = ProcessIOStats.new_from_process(proc) if parent_proc.tid not in period_data.tids: period_data.tids[parent_proc.tid] = ( ProcessIOStats.new_from_process(parent_proc)) proc_stats = period_data.tids[proc.tid] parent_stats = period_data.tids[parent_proc.tid] proc_stats.pid = parent_stats.tid IoAnalysis._assign_fds_to_parent(proc_stats, parent_stats) def _process_statedump_block(self, period_data, **kwargs): dev = kwargs['dev'] diskname = kwargs['diskname'] if dev not in period_data.disks: period_data.disks[dev] = DiskStats(dev, diskname) else: period_data.disks[dev].diskname = diskname def _process_create_fd(self, period_data, **kwargs): timestamp = kwargs['timestamp'] parent_proc = kwargs['parent_proc'] tid = parent_proc.tid fd = kwargs['fd'] if tid not in period_data.tids: period_data.tids[tid] = ProcessIOStats.new_from_process( parent_proc) parent_stats = period_data.tids[tid] if fd not in parent_stats.fds: parent_stats.fds[fd] = [] parent_stats.fds[fd].append(FDStats.new_from_fd(parent_proc.fds[fd], timestamp)) def _process_close_fd(self, period_data, **kwargs): timestamp = kwargs['timestamp'] parent_proc = kwargs['parent_proc'] tid = parent_proc.tid fd = kwargs['fd'] if tid not in period_data.tids: return parent_stats = period_data.tids[tid] last_fd = parent_stats.get_fd(fd) if last_fd is None: return last_fd.close_ts = timestamp def _process_update_fd(self, period_data, **kwargs): timestamp = kwargs['timestamp'] parent_proc = kwargs['parent_proc'] tid = parent_proc.tid fd = kwargs['fd'] if fd not in parent_proc.fds: return if fd not in period_data.tids[tid].fds: period_data.tids[tid].fds[fd] = [] period_data.tids[tid].fds[fd].append( FDStats.new_from_fd(parent_proc.fds[fd], timestamp)) new_filename = parent_proc.fds[fd].filename fd_list = period_data.tids[tid].fds[fd] fd_list[-1].filename = new_filename class DiskStats(): MINORBITS = 20 MINORMASK = ((1 << MINORBITS) - 1) def __init__(self, dev, diskname=None): self.dev = dev if diskname is not None: self.diskname = diskname else: self.diskname = DiskStats._get_name_from_dev(dev) self.min_rq_duration = None self.max_rq_duration = None self.total_rq_sectors = 0 self.total_rq_duration = 0 self.rq_list = [] @classmethod def new_from_disk(cls, disk): return cls(disk.dev, disk.diskname) @property def rq_count(self): return len(self.rq_list) def update_stats(self, req): if self.min_rq_duration is None or req.duration < self.min_rq_duration: self.min_rq_duration = req.duration if self.max_rq_duration is None or req.duration > self.max_rq_duration: self.max_rq_duration = req.duration self.total_rq_sectors += req.nr_sector self.total_rq_duration += req.duration self.rq_list.append(req) def reset(self): self.min_rq_duration = None self.max_rq_duration = None self.total_rq_sectors = 0 self.total_rq_duration = 0 self.rq_list = [] @staticmethod def _get_name_from_dev(dev): # imported from include/linux/kdev_t.h major = dev >> DiskStats.MINORBITS minor = dev & DiskStats.MINORMASK return '(%d,%d)' % (major, minor) class IfaceStats(): def __init__(self, name): self.name = name self.recv_bytes = 0 self.recv_packets = 0 self.sent_bytes = 0 self.sent_packets = 0 def reset(self): self.recv_bytes = 0 self.recv_packets = 0 self.sent_bytes = 
0 self.sent_packets = 0 class ProcessIOStats(stats.Process): def __init__(self, pid, tid, comm): super().__init__(pid, tid, comm) self.disk_io = stats.IO() self.net_io = stats.IO() self.unk_io = stats.IO() self.block_io = stats.IO() # FDStats objects, indexed by fd (fileno) self.fds = {} self.rq_list = [] @classmethod def new_from_process(cls, proc): return cls(proc.pid, proc.tid, proc.comm) # Total read/write does not account for block layer I/O @property def total_read(self): return self.disk_io.read + self.net_io.read + self.unk_io.read @property def total_write(self): return self.disk_io.write + self.net_io.write + self.unk_io.write def update_fd_stats(self, req): if req.errno is not None: return if req.fd is None or self.get_fd(req.fd) is None: return self.get_fd(req.fd).update_stats(req) if isinstance(req, sv.ReadWriteIORequest): if req.fd_in is not None: self.get_fd(req.fd_in).update_stats(req) if req.fd_out is not None: self.get_fd(req.fd_out).update_stats(req) def update_block_stats(self, req): self.rq_list.append(req) if req.operation is sv.IORequest.OP_READ: self.block_io.read += req.size elif req.operation is sv.IORequest.OP_WRITE: self.block_io.write += req.size def update_io_stats(self, req, fd_types): self.rq_list.append(req) if req.size is None or req.errno is not None: return if req.operation is sv.IORequest.OP_READ: self._update_read(req.returned_size, fd_types['fd']) elif req.operation is sv.IORequest.OP_WRITE: self._update_write(req.returned_size, fd_types['fd']) elif req.operation is sv.IORequest.OP_READ_WRITE: self._update_read(req.returned_size, fd_types['fd_in']) self._update_write(req.returned_size, fd_types['fd_out']) def _update_read(self, size, fd_type): if fd_type == sv.FDType.disk: self.disk_io.read += size elif fd_type == sv.FDType.net or fd_type == sv.FDType.maybe_net: self.net_io.read += size else: self.unk_io.read += size def _update_write(self, size, fd_type): if fd_type == sv.FDType.disk: self.disk_io.write += size elif fd_type == sv.FDType.net or fd_type == sv.FDType.maybe_net: self.net_io.write += size else: self.unk_io.write += size def _get_current_fd(self, fd): fd_stats = self.fds[fd][-1] if fd_stats.close_ts is not None: return None return fd_stats @staticmethod def _get_fd_by_timestamp(fd_list, timestamp): """Return the FDStats object whose lifetime contains timestamp. This method performs a recursive binary search on the given fd_list argument, and will find the FDStats object for which the timestamp is contained between its open_ts and close_ts attributes. Args: fd_list (list): list of FDStats object, sorted chronologically by open_ts. timestamp (int): timestamp in nanoseconds (ns) since unix epoch which should be contained in the FD's lifetime. Returns: The FDStats object whose lifetime contains the given timestamp, None if no such object exists. """ list_size = len(fd_list) if list_size == 0: return None midpoint = list_size // 2 fd_stats = fd_list[midpoint] # Handle case of currently open fd (i.e. 
no close_ts) if fd_stats.close_ts is None: if timestamp >= fd_stats.open_ts: return fd_stats else: if fd_stats.open_ts <= timestamp <= fd_stats.close_ts: return fd_stats else: if timestamp < fd_stats.open_ts: return ProcessIOStats._get_fd_by_timestamp( fd_list[:midpoint], timestamp) else: return ProcessIOStats._get_fd_by_timestamp( fd_list[midpoint + 1:], timestamp) def get_fd(self, fd, timestamp=None): if fd not in self.fds or not self.fds[fd]: return None if timestamp is None: fd_stats = self._get_current_fd(fd) else: fd_stats = ProcessIOStats._get_fd_by_timestamp(self.fds[fd], timestamp) return fd_stats def reset(self): self.disk_io.reset() self.net_io.reset() self.unk_io.reset() self.block_io.reset() self.rq_list = [] for fd in self.fds: fd_stats = self.get_fd(fd) if fd_stats is not None: fd_stats.reset() class FDStats(): def __init__(self, fd, filename, fd_type, cloexec, family, open_ts): self.fd = fd self.filename = filename self.fd_type = fd_type self.cloexec = cloexec self.family = family self.open_ts = open_ts self.close_ts = None self.io = stats.IO() # IO Requests that acted upon the FD self.rq_list = [] @classmethod def new_from_fd(cls, fd, open_ts): return cls(fd.fd, fd.filename, fd.fd_type, fd.cloexec, fd.family, open_ts) def update_stats(self, req): if req.operation is sv.IORequest.OP_READ: self.io.read += req.returned_size elif req.operation is sv.IORequest.OP_WRITE: self.io.write += req.returned_size elif req.operation is sv.IORequest.OP_READ_WRITE: if self.fd == req.fd_in: self.io.read += req.returned_size elif self.fd == req.fd_out: self.io.write += req.returned_size self.rq_list.append(req) def reset(self): self.io.reset() self.rq_list = [] class FileStats(): GENERIC_NAMES = ['pipe', 'socket', 'anon_inode', 'unknown'] def __init__(self, filename): self.filename = filename self.io = stats.IO() # Dict of file descriptors representing this file, indexed by # parent pid # FIXME this doesn't cover FD reuse cases self.fd_by_pid = {} def update_stats(self, fd_stats, proc_stats): self.io += fd_stats.io if proc_stats.pid is not None: pid = proc_stats.pid else: pid = proc_stats.tid if pid not in self.fd_by_pid: self.fd_by_pid[pid] = fd_stats.fd def reset(self): self.io.reset() @staticmethod def is_generic_name(filename): for generic_name in FileStats.GENERIC_NAMES: if filename.startswith(generic_name): return True return False lttnganalyses-0.6.1/lttnganalyses/core/memtop.py0000664000175000017500000000532512746220524023562 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . import stats from .analysis import Analysis, PeriodData class _PeriodData(PeriodData): def __init__(self): self.tids = {} class Memtop(Analysis): def __init__(self, state, conf): notification_cbs = { 'tid_page_alloc': self._process_tid_page_alloc, 'tid_page_free': self._process_tid_page_free } super().__init__(state, conf, notification_cbs) def _create_period_data(self): return _PeriodData() def _process_tid_page_alloc(self, period_data, **kwargs): cpu_id = kwargs['cpu_id'] proc = kwargs['proc'] if not self._filter_process(proc): return if not self._filter_cpu(cpu_id): return tid = proc.tid if tid not in period_data.tids: period_data.tids[tid] = ProcessMemStats.new_from_process(proc) period_data.tids[tid].allocated_pages += 1 def _process_tid_page_free(self, period_data, **kwargs): cpu_id = kwargs['cpu_id'] proc = kwargs['proc'] if not self._filter_process(proc): return if not self._filter_cpu(cpu_id): return tid = proc.tid if tid not in period_data.tids: period_data.tids[tid] = ProcessMemStats.new_from_process(proc) period_data.tids[tid].freed_pages += 1 class ProcessMemStats(stats.Process): def __init__(self, pid, tid, comm): super().__init__(pid, tid, comm) self.allocated_pages = 0 self.freed_pages = 0 def reset(self): self.allocated_pages = 0 self.freed_pages = 0 lttnganalyses-0.6.1/lttnganalyses/core/cputop.py0000664000175000017500000001717012775773625023615 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . 
import stats from .analysis import Analysis, PeriodData class _PeriodData(PeriodData): def __init__(self): self.period_begin_ts = None self.cpus = {} self.tids = {} class Cputop(Analysis): def __init__(self, state, conf): notification_cbs = { 'sched_migrate_task': self._process_sched_migrate_task, 'sched_switch_per_cpu': self._process_sched_switch_per_cpu, 'sched_switch_per_tid': self._process_sched_switch_per_tid, 'prio_changed': self._process_prio_changed, } super().__init__(state, conf, notification_cbs) def _create_period_data(self): return _PeriodData() def _begin_period_cb(self, period_data): period = period_data.period period_data.period_begin_ts = period.begin_evt.timestamp def _end_period_cb(self, period_data, completed, begin_captures, end_captures): self._compute_stats(period_data) def _compute_stats(self, period_data): """Compute usage stats relative to a certain time range For each CPU and process tracked by the analysis, we set its usage_percent attribute, which represents the percentage of usage time for the given CPU or process relative to the full duration of the time range. Do note that we need to know the timestamps and not just the duration, because if a CPU or a process is currently busy, we use the end timestamp to add the partial results of the currently running task to the usage stats. """ duration = self.last_event_ts - period_data.period.begin_evt.timestamp for cpu_id in period_data.cpus: cpu = period_data.cpus[cpu_id] if cpu.current_task_start_ts is not None: cpu.total_usage_time += self.last_event_ts - \ cpu.current_task_start_ts cpu.compute_stats(duration) for tid in period_data.tids: proc = period_data.tids[tid] if proc.last_sched_ts is not None: proc.total_cpu_time += self.last_event_ts - \ proc.last_sched_ts proc.compute_stats(duration) def _process_sched_switch_per_cpu(self, period_data, **kwargs): timestamp = kwargs['timestamp'] cpu_id = kwargs['cpu_id'] wakee_proc = kwargs['wakee_proc'] if not self._filter_cpu(cpu_id): return if cpu_id not in period_data.cpus: period_data.cpus[cpu_id] = CpuUsageStats(cpu_id) period_data.cpus[cpu_id].current_task_start_ts = \ period_data.period_begin_ts cpu = period_data.cpus[cpu_id] if cpu.current_task_start_ts is not None: cpu.total_usage_time += timestamp - cpu.current_task_start_ts if not self._filter_process(wakee_proc): cpu.current_task_start_ts = None else: cpu.current_task_start_ts = timestamp def _process_sched_switch_per_tid(self, period_data, **kwargs): cpu_id = kwargs['cpu_id'] wakee_proc = kwargs['wakee_proc'] timestamp = kwargs['timestamp'] prev_tid = kwargs['prev_tid'] next_tid = kwargs['next_tid'] next_comm = kwargs['next_comm'] prev_comm = kwargs['prev_comm'] if not self._filter_cpu(cpu_id): return if prev_tid not in period_data.tids: period_data.tids[prev_tid] = ProcessCpuStats( None, prev_tid, prev_comm) prev_proc = period_data.tids[prev_tid] # Set the last_sched_ts to the beginning of the period # since we missed the entry event. 
prev_proc.last_sched_ts = period_data.period_begin_ts prev_proc = period_data.tids[prev_tid] if prev_proc.last_sched_ts is not None: prev_proc.total_cpu_time += timestamp - prev_proc.last_sched_ts prev_proc.last_sched_ts = None # Only filter on wakee_proc after finalizing the prev_proc # accounting if not self._filter_process(wakee_proc): return if next_tid not in period_data.tids: period_data.tids[next_tid] = ProcessCpuStats(None, next_tid, next_comm) period_data.tids[next_tid].update_prio(timestamp, wakee_proc.prio) next_proc = period_data.tids[next_tid] next_proc.last_sched_ts = timestamp def _process_sched_migrate_task(self, period_data, **kwargs): cpu_id = kwargs['cpu_id'] proc = kwargs['proc'] tid = proc.tid if not self._filter_process(proc): return if not self._filter_cpu(cpu_id): return if tid not in period_data.tids: period_data.tids[tid] = ProcessCpuStats.new_from_process(proc) period_data.tids[tid].migrate_count += 1 def _process_prio_changed(self, period_data, **kwargs): timestamp = kwargs['timestamp'] prio = kwargs['prio'] tid = kwargs['tid'] if tid not in period_data.tids: return period_data.tids[tid].update_prio(timestamp, prio) def _filter_process(self, proc): # Exclude swapper if proc.tid == 0: return False return super()._filter_process(proc) class CpuUsageStats(): def __init__(self, cpu_id): self.cpu_id = cpu_id # Usage time and start timestamp are in nanoseconds (ns) self.total_usage_time = 0 self.current_task_start_ts = None self.usage_percent = None def compute_stats(self, duration): if duration != 0: self.usage_percent = self.total_usage_time * 100 / duration else: self.usage_percent = 0 def reset(self): self.total_usage_time = 0 self.usage_percent = None class ProcessCpuStats(stats.Process): def __init__(self, pid, tid, comm): super().__init__(pid, tid, comm) # CPU Time and timestamp in nanoseconds (ns) self.total_cpu_time = 0 self.last_sched_ts = None self.migrate_count = 0 self.usage_percent = None def compute_stats(self, duration): if duration != 0: self.usage_percent = self.total_cpu_time * 100 / duration else: self.usage_percent = 0 def reset(self): super().reset() self.total_cpu_time = 0 self.migrate_count = 0 self.usage_percent = None lttnganalyses-0.6.1/lttnganalyses/core/event.py0000664000175000017500000000763712745737273023427 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Philippe Proulx # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
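# A minimal usage sketch (illustration only; `bt_ev` stands for a live
# babeltrace.reader.Event yielded by a trace iterator, and 'fd' for any
# field name present in the event):
#
#     ev = Event(bt_ev)   # deep copy of name, cycles, timestamp and fields
#     ev['fd']            # looks the field up across the copied CTF scopes
#     ev.get('fd', -1)    # dict-like access with a default value
#     'fd' in ev          # membership test
#
# Copying the event lets an analysis keep it around after the underlying
# trace iterator has moved on to the next event.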
import babeltrace as bt import collections _CTF_SCOPES = ( bt.CTFScope.EVENT_FIELDS, bt.CTFScope.EVENT_CONTEXT, bt.CTFScope.STREAM_EVENT_CONTEXT, bt.CTFScope.STREAM_EVENT_HEADER, bt.CTFScope.STREAM_PACKET_CONTEXT, bt.CTFScope.TRACE_PACKET_HEADER, ) # This class has an interface which is compatible with the # babeltrace.reader.Event class. This is the result of a deep copy # performed by LTTng analyses. class Event(collections.Mapping): def __init__(self, bt_ev): self._copy_bt_event(bt_ev) def _copy_bt_event(self, bt_ev): self._name = bt_ev.name self._cycles = bt_ev.cycles self._timestamp = bt_ev.timestamp self._fields = {} for scope in _CTF_SCOPES: self._fields[scope] = {} for field_name in bt_ev.field_list_with_scope(scope): field_value = bt_ev.field_with_scope(field_name, scope) self._fields[scope][field_name] = field_value @property def name(self): return self._name @property def cycles(self): return self._cycles @property def timestamp(self): return self._timestamp @property def handle(self): raise NotImplementedError() @property def trace_collection(self): raise NotImplementedError() def _get_first_field(self, field_name): for scope_fields in self._fields.values(): if field_name in scope_fields: return scope_fields[field_name] def field_with_scope(self, field_name, scope): if scope not in self._fields: raise ValueError('Invalid scope provided') if field_name in self._fields[scope]: return self._fields[scope][field_name] def field_list_with_scope(self, scope): if scope not in self._fields: raise ValueError('Invalid scope provided') return list(self._fields[scope].keys()) def __getitem__(self, field_name): field = self._get_first_field(field_name) if field is None: raise KeyError(field_name) return field def __iter__(self): for key in self.keys(): yield key def __len__(self): count = 0 for scope_fields in self._fields.values(): count += len(scope_fields) return count def __contains__(self, field_name): return self._get_first_field(field_name) is not None def keys(self): keys = [] for scope_fields in self._fields.values(): keys += list(scope_fields.keys()) return keys def get(self, field_name, default=None): field = self._get_first_field(field_name) if field is None: return default return field def items(self): raise NotImplementedError() lttnganalyses-0.6.1/lttnganalyses/core/syscalls.py0000664000175000017500000000633412745737273024134 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . 
import stats from .analysis import Analysis, PeriodData class _PeriodData(PeriodData): def __init__(self): self.tids = {} self.total_syscalls = 0 class SyscallsAnalysis(Analysis): def __init__(self, state, conf): notification_cbs = { 'syscall_exit': self._process_syscall_exit } super().__init__(state, conf, notification_cbs) def _create_period_data(self): return _PeriodData() def _process_syscall_exit(self, period_data, **kwargs): cpu_id = kwargs['cpu_id'] proc = kwargs['proc'] tid = proc.tid current_syscall = proc.current_syscall name = current_syscall.name if not self._filter_process(proc): return if not self._filter_cpu(cpu_id): return if tid not in period_data.tids: period_data.tids[tid] = ProcessSyscallStats.new_from_process(proc) proc_stats = period_data.tids[tid] if name not in proc_stats.syscalls: proc_stats.syscalls[name] = SyscallStats(name) proc_stats.syscalls[name].update_stats(current_syscall) proc_stats.total_syscalls += 1 period_data.total_syscalls += 1 class ProcessSyscallStats(stats.Process): def __init__(self, pid, tid, comm): super().__init__(pid, tid, comm) # indexed by syscall name self.syscalls = {} self.total_syscalls = 0 def reset(self): pass class SyscallStats(): def __init__(self, name): self.name = name self.min_duration = None self.max_duration = None self.total_duration = 0 self.syscalls_list = [] @property def count(self): return len(self.syscalls_list) def update_stats(self, syscall): duration = syscall.duration if self.min_duration is None or self.min_duration > duration: self.min_duration = duration if self.max_duration is None or self.max_duration < duration: self.max_duration = duration self.total_duration += duration self.syscalls_list.append(syscall) lttnganalyses-0.6.1/lttnganalyses/core/__init__.py0000664000175000017500000000217512665072151024021 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
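# A minimal sketch (hypothetical names, for illustration only) of the
# pattern shared by the analyses in this package (cputop, memtop,
# syscalls, ...): subclass Analysis, register notification callbacks,
# and accumulate per-period state in a PeriodData subclass.
#
#     from .analysis import Analysis, PeriodData
#
#     class _PeriodData(PeriodData):
#         def __init__(self):
#             self.event_count = 0
#
#     class CountAnalysis(Analysis):
#         def __init__(self, state, conf):
#             callbacks = {
#                 'sched_switch_per_cpu': self._process_sched_switch,
#             }
#             super().__init__(state, conf, callbacks)
#
#         def _create_period_data(self):
#             return _PeriodData()
#
#         def _process_sched_switch(self, period_data, **kwargs):
#             # called once per notification with the current period's data
#             period_data.event_count += 1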
lttnganalyses-0.6.1/lttnganalyses/common/0000775000175000017500000000000013033742625022243 5ustar mjeansonmjeanson00000000000000lttnganalyses-0.6.1/lttnganalyses/common/trace_utils.py0000664000175000017500000001421212745737273025147 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT)
#
# Copyright (C) 2016 - Antoine Busque
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import time
import datetime
import subprocess
import sys
from .version_utils import Version
from .time_utils import NSEC_PER_SEC

BT_INTERSECT_VERSION = Version(1, 4, 0)


def is_multi_day_trace_collection_bt_1_3_2(collection, handles=None):
    """is_multi_day_trace_collection for BT < 1.3.3.

    Args:
        collection (TraceCollection): a babeltrace TraceCollection
            instance.

        handles (dict): a dictionary of babeltrace TraceHandle instances.

    Returns:
        True if the trace collection spans more than one day,
        False otherwise.
    """
    time_begin = None

    for handle in handles.values():
        if time_begin is None:
            time_begin = time.localtime(handle.timestamp_begin /
                                        NSEC_PER_SEC)
            year_begin = time_begin.tm_year
            month_begin = time_begin.tm_mon
            day_begin = time_begin.tm_mday

        time_end = time.localtime(handle.timestamp_end / NSEC_PER_SEC)
        year_end = time_end.tm_year
        month_end = time_end.tm_mon
        day_end = time_end.tm_mday

        if year_begin != year_end:
            return True
        elif month_begin != month_end:
            return True
        elif day_begin != day_end:
            return True

    return False


def is_multi_day_trace_collection(collection, handles=None):
    """Check whether a trace collection spans more than one day.

    Args:
        collection (TraceCollection): a babeltrace TraceCollection
            instance.

        handles (dict): a dictionary of babeltrace TraceHandle instances.

    Returns:
        True if the trace collection spans more than one day,
        False otherwise.
    """
    # Circumvent a bug in Babeltrace < 1.3.3
    if collection.timestamp_begin is None or \
            collection.timestamp_end is None:
        return is_multi_day_trace_collection_bt_1_3_2(collection, handles)

    date_begin = datetime.date.fromtimestamp(
        collection.timestamp_begin // NSEC_PER_SEC
    )
    date_end = datetime.date.fromtimestamp(
        collection.timestamp_end // NSEC_PER_SEC
    )

    return date_begin != date_end


def get_trace_collection_date(collection, handles=None):
    """Get a trace collection's date.

    Args:
        collection (TraceCollection): a babeltrace TraceCollection
            instance.

        handles (dict): a dictionary of babeltrace TraceHandle instances.

    Returns:
        A datetime.date object corresponding to the date at which the
        trace collection was recorded.

    Raises:
        ValueError: if the trace collection spans more than one day.
    """
    if is_multi_day_trace_collection(collection, handles):
        raise ValueError('Trace collection spans multiple days')

    trace_date = datetime.date.fromtimestamp(
        collection.timestamp_begin // NSEC_PER_SEC
    )

    return trace_date


def get_syscall_name(event):
    """Get the name of a syscall from an event.

    Args:
        event (Event): an instance of a babeltrace Event for a syscall
            entry.

    Returns:
        The name of the syscall, stripped of any superfluous prefix.

    Raises:
        ValueError: if the event is not a syscall event.
    """
    name = event.name

    if name.startswith('sys_'):
        return name[4:]
    elif name.startswith('syscall_entry_'):
        return name[14:]
    else:
        raise ValueError('Not a syscall event')


def read_babeltrace_version():
    try:
        output = subprocess.check_output('babeltrace')
    except subprocess.CalledProcessError:
        raise ValueError('Could not run babeltrace to verify version')

    output = output.decode(sys.stdout.encoding)
    first_line = output.splitlines()[0]
    version_string = first_line.split()[-1]

    return Version.new_from_string(version_string)


def check_field_exists(handles, ev_name, field_name):
    """Validate that a field exists in the metadata.

    Args:
        handles (dict): a dictionary of babeltrace TraceHandle instances.

        ev_name (String): the name of the event in which the field must
            exist.

        field_name (String): the field that we are looking for.

    Returns:
        True if the field is found in the event, False if the field is
        not found in the event, or if the event is not found.
    """
    for handle in handles.values():
        for event in handle.events:
            if event.name == ev_name:
                for field in event.fields:
                    if field.name == field_name:
                        return True

    return False


def check_event_exists(handles, name):
    """Validate that an event exists in the metadata.

    Args:
        handles (dict): a dictionary of babeltrace TraceHandle instances.

        name (String): the name of the event to look for.

    Returns:
        True if the event is found in the metadata, False otherwise.
    """
    for handle in handles.values():
        for event in handle.events:
            if event.name == name:
                return True

    return False
lttnganalyses-0.6.1/lttnganalyses/common/version_utils.py0000664000175000017500000000477312723101501025520 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT)
#
# Copyright (C) 2015 - Antoine Busque
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
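# A minimal usage sketch (illustration only): Version instances are ordered
# through functools.total_ordering from the __lt__ and __eq__ methods defined
# below, and new_from_string() falls back to a "0.0.0+unknown" version when
# the string cannot be parsed:
#
#     Version(1, 4, 0) > Version.new_from_string('1.3.2')   # True
#     Version.new_from_string('2.0.1-rc1').extra            # '-rc1'
#     repr(Version.new_from_string('garbage'))              # '0.0.0+unknown'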
import re
from functools import total_ordering


@total_ordering
class Version:
    def __init__(self, major, minor, patch, extra=None):
        self.major = major
        self.minor = minor
        self.patch = patch
        self.extra = extra

    def __lt__(self, other):
        if self.major < other.major:
            return True
        if self.major > other.major:
            return False

        if self.minor < other.minor:
            return True
        if self.minor > other.minor:
            return False

        return self.patch < other.patch

    def __eq__(self, other):
        return (
            self.major == other.major and
            self.minor == other.minor and
            self.patch == other.patch
        )

    def __repr__(self):
        version_str = '{}.{}.{}'.format(self.major, self.minor, self.patch)

        if self.extra:
            version_str += self.extra

        return version_str

    @classmethod
    def new_from_string(cls, string):
        version_match = re.match(r'(\d+)\.(\d+)\.(\d+)(.*)', string)

        if version_match is None:
            major = minor = patch = 0
            extra = '+unknown'
        else:
            major = int(version_match.group(1))
            minor = int(version_match.group(2))
            patch = int(version_match.group(3))
            extra = version_match.group(4)

        return cls(major, minor, patch, extra)
lttnganalyses-0.6.1/lttnganalyses/common/parse_utils.py0000664000175000017500000003363412725616022025156 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT)
#
# Copyright (C) 2016 - Antoine Busque
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import datetime
import re
from time import timezone
from . import trace_utils
from .time_utils import NSEC_PER_SEC


def _split_value_units(raw_str):
    """Take a string with a numerical value and units, and separate the two.

    Args:
        raw_str (str): the string to parse, with numerical value and
            (optionally) units.

    Returns:
        A tuple (value, units), where value is a string and units is
        either a string or `None` if no units were found.
    """
    try:
        units_index = next(i for i, c in enumerate(raw_str) if c.isalpha())
    except StopIteration:
        # no units found
        return (raw_str, None)

    return (raw_str[:units_index], raw_str[units_index:])


def parse_size(size_str):
    """Convert a human-readable size string to an integral number of bytes.

    Args:
        size_str (str): the formatted string comprised of the size and
            units.

    Returns:
        A number of bytes.

    Raises:
        ValueError: if units are unrecognised or the size is not a real
            number.
    """
    binary_units = ['B', 'KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB',
                    'ZiB', 'YiB']
    # units as printed by GNU coreutils (e.g. ls or du), using base
    # 1024 as well
    coreutils_units = ['B', 'K', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y']
    si_units = ['B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB']

    size, units = _split_value_units(size_str)

    try:
        size = float(size)
    except ValueError:
        raise ValueError('invalid size: {}'.format(size))

    # If no units have been found, assume bytes
    if units is not None:
        if units in binary_units:
            base = 1024
            exponent = binary_units.index(units)
        elif units in coreutils_units:
            base = 1024
            exponent = coreutils_units.index(units)
        elif units in si_units:
            base = 1000
            exponent = si_units.index(units)
        else:
            raise ValueError('unrecognised units: {}'.format(units))

        size *= base ** exponent

    return int(size)


def parse_duration(duration_str):
    """Convert a human-readable duration string to an integral number of
    nanoseconds.

    Args:
        duration_str (str): the formatted string comprised of the
            duration and units.

    Returns:
        A number of nanoseconds.

    Raises:
        ValueError: if units are unrecognised or the duration is not a
            real number.
    """
    base = 1000

    duration, units = _split_value_units(duration_str)

    try:
        duration = float(duration)
    except ValueError:
        raise ValueError('invalid duration: {}'.format(duration))

    if units is not None:
        if units == 's':
            exponent = 3
        elif units == 'ms':
            exponent = 2
        elif units in ['us', 'µs']:
            exponent = 1
        elif units == 'ns':
            exponent = 0
        else:
            raise ValueError('unrecognised units: {}'.format(units))
    else:
        # no units defaults to seconds
        exponent = 3

    duration *= base ** exponent

    return int(duration)


def _parse_date_full_with_nsec(date):
    """Parse full date string with nanosecond resolution.

    This matches either 2014-12-12 17:29:43.802588035 or
    2014-12-12T17:29:43.802588035.

    Args:
        date (str): the date string to be parsed.

    Returns:
        A tuple of the format (date_time, nsec), where date_time is a
        datetime.datetime object and nsec is an int of the remaining
        nanoseconds.

    Raises:
        ValueError: if the date format does not match.
    """
    pattern = re.compile(
        r'^(?P<year>\d{4})-(?P<mon>[01]\d)-(?P<day>[0-3]\d)[\sTt]'
        r'(?P<hour>\d{2}):(?P<min>\d{2}):(?P<sec>\d{2})\.(?P<nsec>\d{9})$'
    )

    if not pattern.match(date):
        raise ValueError('Wrong date format: {}'.format(date))

    year = pattern.search(date).group('year')
    month = pattern.search(date).group('mon')
    day = pattern.search(date).group('day')
    hour = pattern.search(date).group('hour')
    minute = pattern.search(date).group('min')
    sec = pattern.search(date).group('sec')
    nsec = pattern.search(date).group('nsec')

    date_time = datetime.datetime(
        int(year), int(month), int(day), int(hour), int(minute), int(sec)
    )

    return date_time, int(nsec)


def _parse_date_full(date):
    """Parse full date string.

    This matches either 2014-12-12 17:29:43 or 2014-12-12T17:29:43.

    Args:
        date (str): the date string to be parsed.

    Returns:
        A tuple of the format (date_time, nsec), where date_time is a
        datetime.datetime object and nsec is 0.

    Raises:
        ValueError: if the date format does not match.
    """
    pattern = re.compile(
        r'^(?P<year>\d{4})-(?P<mon>[01]\d)-(?P<day>[0-3]\d)[\sTt]'
        r'(?P<hour>\d{2}):(?P<min>\d{2}):(?P<sec>\d{2})$'
    )

    if not pattern.match(date):
        raise ValueError('Wrong date format: {}'.format(date))

    year = pattern.search(date).group('year')
    month = pattern.search(date).group('mon')
    day = pattern.search(date).group('day')
    hour = pattern.search(date).group('hour')
    minute = pattern.search(date).group('min')
    sec = pattern.search(date).group('sec')
    nsec = 0

    date_time = datetime.datetime(
        int(year), int(month), int(day), int(hour), int(minute), int(sec)
    )

    return date_time, nsec


def _parse_date_time_with_nsec(date):
    """Parse time string with nanosecond resolution.

    This matches 17:29:43.802588035.

    Args:
        date (str): the date string to be parsed.

    Returns:
        A tuple of the format (date_time, nsec), where date_time is a
        datetime.time object and nsec is an int of the remaining
        nanoseconds.

    Raises:
        ValueError: if the date format does not match.
    """
    pattern = re.compile(
        r'^(?P<hour>\d{2}):(?P<min>\d{2}):(?P<sec>\d{2})\.(?P<nsec>\d{9})$'
    )

    if not pattern.match(date):
        raise ValueError('Wrong date format: {}'.format(date))

    hour = pattern.search(date).group('hour')
    minute = pattern.search(date).group('min')
    sec = pattern.search(date).group('sec')
    nsec = pattern.search(date).group('nsec')

    time = datetime.time(int(hour), int(minute), int(sec))

    return time, int(nsec)


def _parse_date_time(date):
    """Parse time string.

    This matches 17:29:43.

    Args:
        date (str): the date string to be parsed.

    Returns:
        A tuple of the format (date_time, nsec), where date_time is a
        datetime.time object and nsec is 0.

    Raises:
        ValueError: if the date format does not match.
    """
    pattern = re.compile(
        r'^(?P<hour>\d{2}):(?P<min>\d{2}):(?P<sec>\d{2})$'
    )

    if not pattern.match(date):
        raise ValueError('Wrong date format: {}'.format(date))

    hour = pattern.search(date).group('hour')
    minute = pattern.search(date).group('min')
    sec = pattern.search(date).group('sec')
    nsec = 0

    time = datetime.time(int(hour), int(minute), int(sec))

    return time, nsec


def _parse_date_timestamp(date):
    """Parse timestamp string in nanoseconds from epoch.

    This matches 1418423383802588035.

    Args:
        date (str): the date string to be parsed.

    Returns:
        A tuple of the format (date_time, nsec), where date_time is a
        datetime.datetime object and nsec is an int of the remaining
        nanoseconds.

    Raises:
        ValueError: if the date format does not match.
    """
    pattern = re.compile(r'^\d+$')

    if not pattern.match(date):
        raise ValueError('Wrong date format: {}'.format(date))

    timestamp_ns = int(date)
    date_time = datetime.datetime.fromtimestamp(
        timestamp_ns // NSEC_PER_SEC
    )
    # Set the microseconds to 0 because values < 1 second are covered
    # by the nsec value.
    date_time = date_time.replace(microsecond=0)
    nsec = timestamp_ns % NSEC_PER_SEC

    return date_time, nsec


def parse_date(date):
    """Try to parse a date string from one of many formats.

    Args:
        date (str): the date string to be parsed.

    Returns:
        A tuple of the format (date_time, nsec), where date_time is
        one of either datetime.datetime or datetime.time, depending on
        whether the date string contains full date information or only
        the time of day. The latter case can still be useful when used
        in conjunction with a trace collection's date to provide the
        missing information. The nsec element of the tuple is an int
        and corresponds to the nanoseconds for the given
        date/timestamp. This is due to datetime objects only
        supporting a resolution down to the microsecond.

    Raises:
        ValueError: if the date does not correspond to any of the
            supported formats.
    """
    parsers = [
        _parse_date_full_with_nsec, _parse_date_full,
        _parse_date_time_with_nsec, _parse_date_time,
        _parse_date_timestamp
    ]

    date_time = None
    nsec = None

    for parser in parsers:
        try:
            (date_time, nsec) = parser(date)
        except ValueError:
            continue

        # If no exception was raised, the parser found a match, so
        # stop iterating
        break

    if date_time is None or nsec is None:
        # None of the parsers were a match
        raise ValueError('Unrecognised date format: {}'.format(date))

    return date_time, nsec


def parse_trace_collection_date(collection, date, gmt=False, handles=None):
    """Parse a date string, using a trace collection to disambiguate
    incomplete dates.

    Args:
        collection (TraceCollection): a babeltrace TraceCollection
            instance.

        date (string): the date string to be parsed.

        gmt (bool, optional): flag indicating whether the timestamp is
            in the local timezone or gmt (default: False).

        handles (dict): a dictionary of babeltrace TraceHandle instances.

    Returns:
        A timestamp (int) in nanoseconds since epoch, corresponding to
        the parsed date.

    Raises:
        ValueError: if the date format is unrecognised, or if the date
            format does not specify the date and the trace collection
            spans multiple days.
    """
    try:
        date_time, nsec = parse_date(date)
    except ValueError:
        # This might raise ValueError if the date is in an invalid
        # format, so just re-raise the exception to inform the caller
        # of the problem.
        raise

    # date_time will either be an actual datetime.datetime object, or
    # just a datetime.time object, depending on the format. In the
    # latter case, try and fill out the missing date information from
    # the trace collection's date.
    if isinstance(date_time, datetime.time):
        try:
            collection_date = \
                trace_utils.get_trace_collection_date(collection, handles)
        except ValueError:
            raise ValueError(
                'Invalid date format for multi-day trace: {}'.format(date)
            )

        date_time = datetime.datetime.combine(collection_date, date_time)

    if gmt:
        date_time = date_time + datetime.timedelta(seconds=timezone)

    timestamp_ns = int(date_time.timestamp()) * NSEC_PER_SEC + nsec

    return timestamp_ns


def parse_trace_collection_time_range(collection, time_range, gmt=False,
                                      handles=None):
    """Parse a time range string, using a trace collection to
    disambiguate incomplete dates.

    Args:
        collection (TraceCollection): a babeltrace TraceCollection
            instance.

        time_range (string): the time range string to be parsed.

        gmt (bool, optional): flag indicating whether the timestamps are
            in the local timezone or gmt (default: False).

        handles (dict): a dictionary of babeltrace TraceHandle instances.

    Returns:
        A tuple (begin, end) of the two timestamps (int) in nanoseconds
        since epoch, corresponding to the parsed dates.

    Raises:
        ValueError: if the time range or date format is unrecognised,
            or if the date format does not specify the date and the
            trace collection spans multiple days.
    """
    pattern = re.compile(r'^\[(?P<begin>.*),(?P<end>.*)\]$')

    if not pattern.match(time_range):
        raise ValueError('Invalid time range format: {}'.format(time_range))

    begin_str = pattern.search(time_range).group('begin').strip()
    end_str = pattern.search(time_range).group('end').strip()

    try:
        begin = parse_trace_collection_date(collection, begin_str, gmt,
                                            handles)
        end = parse_trace_collection_date(collection, end_str, gmt, handles)
    except ValueError:
        # Either of the dates was in the wrong format, propagate the
        # exception to the caller.
        raise

    return begin, end
lttnganalyses-0.6.1/lttnganalyses/common/format_utils.py0000664000175000017500000001446412723101552025327 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT)
#
# Copyright (C) 2016 - Antoine Busque
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import math import socket import struct import time from .time_utils import NSEC_PER_SEC def format_size(size, binary_prefix=True): """Convert an integral number of bytes to a human-readable string. Args: size (int): a non-negative number of bytes. binary_prefix (bool, optional): whether to use binary units prefixes, over SI prefixes (default: True). Returns: The formatted string comprised of the size and units. Raises: ValueError: if size < 0. """ if size < 0: raise ValueError('Cannot format negative size') if binary_prefix: base = 1024 units = [' B', 'KiB', 'MiB', 'GiB', 'TiB', 'PiB', 'EiB', 'ZiB', 'YiB'] else: base = 1000 units = [' B', 'KB', 'MB', 'GB', 'TB', 'PB', 'EB', 'ZB', 'YB'] if size == 0: exponent = 0 else: exponent = int(math.log(size, base)) if exponent >= len(units): # Don't try and use a unit above YiB/YB exponent = len(units) - 1 size /= base ** exponent unit = units[exponent] if exponent == 0: # Don't display fractions of a byte format_str = '{:0.0f} {}' else: format_str = '{:0.2f} {}' return format_str.format(size, unit) def format_prio_list(prio_list): """Format a list of prios into a string of unique prios with count. Args: prio_list (list): a list of PrioEvent objects. Returns: The formatted string containing the unique priorities and their count if they occurred more than once. """ prio_count = {} prio_str = None for prio_event in prio_list: prio = prio_event.prio if prio not in prio_count: prio_count[prio] = 0 prio_count[prio] += 1 for prio in sorted(prio_count.keys()): count = prio_count[prio] if count > 1: count_str = ' ({} times)'.format(count) else: count_str = '' if prio_str is None: prio_str = '[{}{}'.format(prio, count_str) else: prio_str += ', {}{}'.format(prio, count_str) if prio_str is None: prio_str = '[]' else: prio_str += ']' return prio_str def format_timestamp(timestamp, print_date=False, gmt=False): """Format a timestamp into a human-readable date string Args: timestamp (int): nanoseconds since epoch. print_date (bool, optional): flag indicating whether to print the full date or just the time of day (default: False). gmt (bool, optional): flag indicating whether the timestamp is in the local timezone or gmt (default: False). Returns: The formatted date string, containing either the full date or just the time of day. """ date_fmt = '{:04}-{:02}-{:02} ' time_fmt = '{:02}:{:02}:{:02}.{:09}' if gmt: date = time.gmtime(timestamp // NSEC_PER_SEC) else: date = time.localtime(timestamp // NSEC_PER_SEC) formatted_ts = time_fmt.format( date.tm_hour, date.tm_min, date.tm_sec, timestamp % NSEC_PER_SEC ) if print_date: date_str = date_fmt.format(date.tm_year, date.tm_mon, date.tm_mday) formatted_ts = date_str + formatted_ts return formatted_ts def format_time_range(begin_ts, end_ts, print_date=False, gmt=False): """Format a pair of timestamps into a human-readable date string. Args: begin_ts (int): nanoseconds since epoch to beginning of time range. end_ts (int): nanoseconds since epoch to end of time range. 
print_date (bool, optional): flag indicating whether to print the full date or just the time of day (default: False). gmt (bool, optional): flag indicating whether the timestamp is in the local timezone or gmt (default: False). Returns: The formatted dates string, containing either the full date or just the time of day, enclosed within square brackets and delimited by a comma. """ time_range_fmt = '[{}, {}]' begin_str = format_timestamp(begin_ts, print_date, gmt) end_str = format_timestamp(end_ts, print_date, gmt) return time_range_fmt.format(begin_str, end_str) def format_ipv4(ip, port=None): """Format an ipv4 address into a human-readable string. Args: ip (varies): the ip address as extracted in an LTTng event. Either an integer or a list of integers, depending on the tracer version. port (int, optional): the port number associated with the address. Returns: The formatted string containing the ipv4 address and, optionally, the port number. """ # depending on the version of lttng-modules, the v4addr is an # integer (< 2.6) or sequence (>= 2.6) try: ip_str = '{}.{}.{}.{}'.format(ip[0], ip[1], ip[2], ip[3]) except TypeError: # The format string '!I' tells pack to interpret ip as a # packed structure of network-endian 32-bit unsigned integers, # which inet_ntoa can then convert into the formatted string ip_str = socket.inet_ntoa(struct.pack('!I', ip)) if port is not None: ip_str += ':{}'.format(port) return ip_str lttnganalyses-0.6.1/lttnganalyses/common/time_utils.py0000664000175000017500000000222312723101501024755 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. NSEC_PER_SEC = 1000000000 lttnganalyses-0.6.1/lttnganalyses/common/__init__.py0000664000175000017500000000217012665072151024354 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. lttnganalyses-0.6.1/lttnganalyses/_version.py0000664000175000017500000000072713033742625023157 0ustar mjeansonmjeanson00000000000000 # This file was generated by 'versioneer.py' (0.15) from # revision-control system data, or from the parent directory name of an # unpacked source archive. Distribution tarballs contain a pre-generated copy # of this file. import json import sys version_json = ''' { "dirty": false, "error": null, "full-revisionid": "cbb1dacba18c1c581db32fc4c36bc16644be4b38", "version": "0.6.1" } ''' # END VERSION_JSON def get_versions(): return json.loads(version_json) lttnganalyses-0.6.1/lttnganalyses/cli/0000775000175000017500000000000013033742625021522 5ustar mjeansonmjeanson00000000000000lttnganalyses-0.6.1/lttnganalyses/cli/irq.py0000664000175000017500000006335513033475105022677 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # 2015 - Philippe Proulx # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import itertools import math import statistics import sys from . import mi from . 
import termgraph from .command import Command from ..core import irq as core_irq from ..linuxautomaton import sv class IrqAnalysisCommand(Command): _DESC = """The irq command.""" _ANALYSIS_CLASS = core_irq.IrqAnalysis _MI_TITLE = 'System interrupt analysis' _MI_DESCRIPTION = 'Interrupt frequency distribution, statistics, and log' _MI_TAGS = [mi.Tags.INTERRUPT, mi.Tags.STATS, mi.Tags.FREQ, mi.Tags.LOG] _MI_TABLE_CLASS_LOG = 'log' _MI_TABLE_CLASS_HARD_STATS = 'hard-stats' _MI_TABLE_CLASS_SOFT_STATS = 'soft-stats' _MI_TABLE_CLASS_FREQ = 'freq' _MI_TABLE_CLASS_SUMMARY = 'summary' _MI_TABLE_CLASSES = [ ( _MI_TABLE_CLASS_LOG, 'Interrupt log', [ ('time_range', 'Time range', mi.TimeRange), ('raised_ts', 'Raised timestamp', mi.Timestamp), ('cpu', 'CPU', mi.Cpu), ('irq', 'Interrupt', mi.Irq), ] ), ( _MI_TABLE_CLASS_HARD_STATS, 'Hardware interrupt statistics', [ ('irq', 'Interrupt', mi.Irq), ('count', 'Interrupt count', mi.Number, 'interrupts'), ('min_duration', 'Minimum duration', mi.Duration), ('avg_duration', 'Average duration', mi.Duration), ('max_duration', 'Maximum duration', mi.Duration), ('stdev_duration', 'Interrupt duration standard deviation', mi.Duration), ] ), ( _MI_TABLE_CLASS_SOFT_STATS, 'Software interrupt statistics', [ ('irq', 'Interrupt', mi.Irq), ('count', 'Interrupt count', mi.Number, 'interrupts'), ('min_duration', 'Minimum duration', mi.Duration), ('avg_duration', 'Average duration', mi.Duration), ('max_duration', 'Maximum duration', mi.Duration), ('stdev_duration', 'Interrupt duration standard deviation', mi.Duration), ('raise_count', 'Interrupt raise count', mi.Number, 'interrupt raises'), ('min_latency', 'Minimum raise latency', mi.Duration), ('avg_latency', 'Average raise latency', mi.Duration), ('max_latency', 'Maximum raise latency', mi.Duration), ('stdev_latency', 'Interrupt raise latency standard deviation', mi.Duration), ] ), ( _MI_TABLE_CLASS_FREQ, 'Interrupt handler duration frequency distribution', [ ('duration_lower', 'Duration (lower bound)', mi.Duration), ('duration_upper', 'Duration (upper bound)', mi.Duration), ('count', 'Interrupt count', mi.Number, 'interrupts'), ] ), ( _MI_TABLE_CLASS_SUMMARY, 'Interrupt statistics - summary', [ ('time_range', 'Time range', mi.TimeRange), ('count', 'Total interrupt count', mi.Number, 'interrupts'), ] ), ] def _analysis_tick(self, period_data, end_ns): if period_data is None: return begin_ns = period_data.period.begin_evt.timestamp log_table = None hard_stats_table = None soft_stats_table = None freq_tables = None if self._args.log: log_table = self._get_log_result_table(period_data, begin_ns, end_ns) if self._args.stats or self._args.freq: hard_stats_table, soft_stats_table, freq_tables = \ self._get_stats_freq_result_tables(period_data, begin_ns, end_ns) if self._mi_mode: self._mi_append_result_table(log_table) self._mi_append_result_table(hard_stats_table) self._mi_append_result_table(soft_stats_table) if self._args.freq_series: freq_tables = [self._get_freq_series_table(freq_tables)] self._mi_append_result_tables(freq_tables) else: self._print_date(begin_ns, end_ns) if hard_stats_table or soft_stats_table or freq_tables: self._print_stats_freq(hard_stats_table, soft_stats_table, freq_tables) if log_table: print() if log_table: self._print_log(log_table) def _create_summary_result_tables(self): if not self._args.stats: self._mi_clear_result_tables() return hard_stats_tables = \ self._mi_get_result_tables(self._MI_TABLE_CLASS_HARD_STATS) soft_stats_tables = \ self._mi_get_result_tables(self._MI_TABLE_CLASS_SOFT_STATS) 
assert len(hard_stats_tables) == len(soft_stats_tables) begin = hard_stats_tables[0].timerange.begin.value end = hard_stats_tables[-1].timerange.end.value summary_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_SUMMARY, begin, end) for hs_table, ss_table in zip(hard_stats_tables, soft_stats_tables): assert hs_table.timerange == ss_table.timerange for row in itertools.chain(hs_table.rows, ss_table.rows): summary_table.append_row( time_range=hs_table.timerange, count=row.count, ) self._mi_clear_result_tables() self._mi_append_result_table(summary_table) def _get_log_result_table(self, period_data, begin_ns, end_ns): result_table = self._mi_create_result_table(self._MI_TABLE_CLASS_LOG, begin_ns, end_ns) for irq in period_data.irq_list: if not self._filter_irq(irq): continue if type(irq) is sv.HardIRQ: is_hard = True raised_ts_do = mi.Empty() name = period_data.hard_irq_stats[irq.id].name else: is_hard = False if irq.raise_ts is None: raised_ts_do = mi.Unknown() else: raised_ts_do = mi.Timestamp(irq.raise_ts) name = period_data.softirq_stats[irq.id].name result_table.append_row( time_range=mi.TimeRange(irq.begin_ts, irq.end_ts), raised_ts=raised_ts_do, cpu=mi.Cpu(irq.cpu_id), irq=mi.Irq(is_hard, irq.id, name), ) return result_table def _get_common_stats_result_table_row(self, is_hard, irq_nr, irq_stats): stdev = self._compute_duration_stdev(irq_stats) if math.isnan(stdev): stdev = mi.Unknown() else: stdev = mi.Duration(stdev) return ( mi.Irq(is_hard, irq_nr, irq_stats.name), mi.Number(irq_stats.count), mi.Duration(irq_stats.min_duration), mi.Duration(irq_stats.total_duration / irq_stats.count), mi.Duration(irq_stats.max_duration), stdev, ) def _append_hard_stats_result_table_row(self, irq_nr, irq_stats, hard_stats_table): common_row = self._get_common_stats_result_table_row(True, irq_nr, irq_stats) hard_stats_table.append_row( irq=common_row[0], count=common_row[1], min_duration=common_row[2], avg_duration=common_row[3], max_duration=common_row[4], stdev_duration=common_row[5], ) def _append_soft_stats_result_table_row(self, irq_nr, irq_stats, soft_stats_table): common_row = self._get_common_stats_result_table_row(False, irq_nr, irq_stats) if irq_stats.raise_count == 0: min_latency = mi.Unknown() avg_latency = mi.Unknown() max_latency = mi.Unknown() stdev_latency = mi.Unknown() else: min_latency = mi.Duration(irq_stats.min_raise_latency) avg_latency = irq_stats.total_raise_latency / irq_stats.raise_count avg_latency = mi.Duration(avg_latency) max_latency = mi.Duration(irq_stats.max_raise_latency) stdev = self._compute_raise_latency_stdev(irq_stats) if math.isnan(stdev): stdev_latency = mi.Unknown() else: stdev_latency = mi.Duration(stdev) soft_stats_table.append_row( irq=common_row[0], count=common_row[1], min_duration=common_row[2], avg_duration=common_row[3], max_duration=common_row[4], stdev_duration=common_row[5], raise_count=mi.Number(irq_stats.raise_count), min_latency=min_latency, avg_latency=avg_latency, max_latency=max_latency, stdev_latency=stdev_latency, ) def _fill_freq_result_table(self, period_data, irq_stats, freq_table): # The number of bins for the histogram resolution = self._args.freq_resolution if self._args.min is not None: min_duration = self._args.min else: min_duration = irq_stats.min_duration if self._args.max is not None: max_duration = self._args.max else: max_duration = irq_stats.max_duration # ns to µs min_duration /= 1000 max_duration /= 1000 # histogram's step if self._args.freq_uniform: # TODO: perform only one time durations = [irq.duration for irq 
in period_data.irq_list] min_duration, max_duration, step = \ self._find_uniform_freq_values(durations) else: step = (max_duration - min_duration) / resolution if step == 0: return buckets = [] counts = [] for i in range(resolution): buckets.append(i * step) counts.append(0) for irq in irq_stats.irq_list: duration = irq.duration / 1000 index = int((duration - min_duration) / step) if index >= resolution: # special case for max value: put in last bucket (includes # its upper bound) if duration == max_duration: counts[index - 1] += 1 continue counts[index] += 1 for index, count in enumerate(counts): lower_bound = index * step + min_duration upper_bound = (index + 1) * step + min_duration freq_table.append_row( duration_lower=mi.Duration.from_us(lower_bound), duration_upper=mi.Duration.from_us(upper_bound), count=mi.Number(count), ) def _fill_stats_freq_result_tables(self, period_data, begin_ns, end_ns, is_hard, analysis_stats, filter_list, hard_stats_table, soft_stats_table, freq_tables): for id in sorted(analysis_stats): if filter_list and str(id) not in filter_list: continue irq_stats = analysis_stats[id] if irq_stats.count == 0: continue if self._args.stats: if is_hard: append_row_fn = self._append_hard_stats_result_table_row table = hard_stats_table else: append_row_fn = self._append_soft_stats_result_table_row table = soft_stats_table append_row_fn(id, irq_stats, table) if self._args.freq: subtitle = '{} ({})'.format(irq_stats.name, id) freq_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_FREQ, begin_ns, end_ns, subtitle) self._fill_freq_result_table(period_data, irq_stats, freq_table) # it is possible that the frequency distribution result # table is empty; we need to keep it any way because # there's a 1-to-1 association between the statistics # row indexes (if available) and the frequency table # indexes freq_tables.append(freq_table) def _get_freq_series_table(self, freq_tables): if not freq_tables: return column_infos = [ ('duration_lower', 'Duration (lower bound)', mi.Duration), ('duration_upper', 'Duration (upper bound)', mi.Duration), ] for index, freq_table in enumerate(freq_tables): column_infos.append(( 'irq{}'.format(index), freq_table.subtitle, mi.Number, 'interrupts' )) title = 'Interrupt handlers duration frequency distributions' table_class = mi.TableClass(None, title, column_infos) begin = freq_tables[0].timerange.begin.value end = freq_tables[0].timerange.end.value result_table = mi.ResultTable(table_class, begin, end) for row_index, freq0_row in enumerate(freq_tables[0].rows): row_tuple = [ freq0_row.duration_lower, freq0_row.duration_upper, ] for freq_table in freq_tables: freq_row = freq_table.rows[row_index] row_tuple.append(freq_row.count) result_table.append_row_tuple(tuple(row_tuple)) return result_table def _get_stats_freq_result_tables(self, period_data, begin_ns, end_ns): def fill_stats_freq_result_tables(period_data, is_hard, stats, filter_list): self._fill_stats_freq_result_tables(period_data, begin_ns, end_ns, is_hard, stats, filter_list, hard_stats_table, soft_stats_table, freq_tables) hard_stats_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_HARD_STATS, begin_ns, end_ns) soft_stats_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_SOFT_STATS, begin_ns, end_ns) freq_tables = [] if self._args.irq_filter_list is not None or \ self._args.softirq_filter_list is None: fill_stats_freq_result_tables(period_data, True, period_data.hard_irq_stats, self._args.irq_filter_list) if self._args.softirq_filter_list is not None or \ 
self._args.irq_filter_list is None: fill_stats_freq_result_tables(period_data, False, period_data.softirq_stats, self._args.softirq_filter_list) return hard_stats_table, soft_stats_table, freq_tables def _print_log(self, result_table): fmt = '[{:<18}, {:<18}] {:>15} {:>4} {:<9} {:>4} {:<22}' title_fmt = '{:<20} {:<19} {:>15} {:>4} {:<9} {:>4} {:<22}' print(title_fmt.format('Begin', 'End', 'Duration (us)', 'CPU', 'Type', '#', 'Name')) for row in result_table.rows: timerange = row.time_range begin_ts = timerange.begin.value end_ts = timerange.end.value if type(row.raised_ts) is mi.Timestamp: raised_ts = ' (raised at {})'.format( self._format_timestamp(row.raised_ts.value) ) else: raised_ts = '' cpu_id = row.cpu.id irq_do = row.irq if irq_do.is_hard: irqtype = 'IRQ' else: irqtype = 'SoftIRQ' print(fmt.format(self._format_timestamp(begin_ts), self._format_timestamp(end_ts), '%0.03f' % ((end_ts - begin_ts) / 1000), '%d' % cpu_id, irqtype, irq_do.nr, irq_do.name + raised_ts)) def _validate_transform_args(self): args = self._args args.irq_filter_list = None args.softirq_filter_list = None if args.irq: args.irq_filter_list = args.irq.split(',') if args.softirq: args.softirq_filter_list = args.softirq.split(',') def _compute_duration_stdev(self, irq_stats_item): if irq_stats_item.count < 2: return float('nan') durations = [] for irq in irq_stats_item.irq_list: durations.append(irq.end_ts - irq.begin_ts) return statistics.stdev(durations) def _compute_raise_latency_stdev(self, irq_stats_item): if irq_stats_item.raise_count < 2: return float('nan') raise_latencies = [] for irq in irq_stats_item.irq_list: if irq.raise_ts is None: continue raise_latencies.append(irq.begin_ts - irq.raise_ts) return statistics.stdev(raise_latencies) def _print_frequency_distribution(self, freq_table): title_fmt = 'Handler duration frequency distribution {}' graph = termgraph.FreqGraph( data=freq_table.rows, get_value=lambda row: row.count.value, get_lower_bound=lambda row: row.duration_lower.to_us(), title=title_fmt.format(freq_table.subtitle), unit='µs' ) graph.print_graph() def _filter_irq(self, irq): if type(irq) is sv.HardIRQ: if self._args.irq_filter_list: return str(irq.id) in self._args.irq_filter_list if self._args.softirq_filter_list: return False else: # SoftIRQ if self._args.softirq_filter_list: return str(irq.id) in self._args.softirq_filter_list if self._args.irq_filter_list: return False return True def _print_hard_irq_stats_row(self, row): output_str = self._get_duration_stats_str(row) print(output_str) def _print_soft_irq_stats_row(self, row): output_str = self._get_duration_stats_str(row) if row.raise_count.value != 0: output_str += self._get_raise_latency_str(row) print(output_str) def _get_duration_stats_str(self, row): format_str = '{:<3} {:<18} {:>5} {:>12} {:>12} {:>12} {:>12} {:<2}' irq_do = row.irq count = row.count.value min_duration = row.min_duration.to_us() avg_duration = row.avg_duration.to_us() max_duration = row.max_duration.to_us() if type(row.stdev_duration) is mi.Unknown: duration_stdev_str = '?' 
else: duration_stdev_str = '%0.03f' % row.stdev_duration.to_us() output_str = format_str.format('%d:' % irq_do.nr, '<%s>' % irq_do.name, '%d' % count, '%0.03f' % min_duration, '%0.03f' % avg_duration, '%0.03f' % max_duration, '%s' % duration_stdev_str, ' |') return output_str def _get_raise_latency_str(self, row): format_str = ' {:>6} {:>12} {:>12} {:>12} {:>12}' raise_count = row.raise_count.value min_raise_latency = row.min_latency.to_us() avg_raise_latency = row.avg_latency.to_us() max_raise_latency = row.max_latency.to_us() if type(row.stdev_latency) is mi.Unknown: raise_latency_stdev_str = '?' else: raise_latency_stdev_str = '%0.03f' % row.stdev_latency.to_us() output_str = format_str.format(raise_count, '%0.03f' % min_raise_latency, '%0.03f' % avg_raise_latency, '%0.03f' % max_raise_latency, '%s' % raise_latency_stdev_str) return output_str def _print_stats_freq(self, hard_stats_table, soft_stats_table, freq_tables): hard_header_format = '{:<52} {:<12}\n' \ '{:<22} {:<14} {:<12} {:<12} {:<10} {:<12}\n' hard_header = hard_header_format.format( 'Hard IRQ', 'Duration (us)', '', 'count', 'min', 'avg', 'max', 'stdev' ) hard_header += ('-' * 82 + '|') soft_header_format = '{:<52} {:<52} {:<12}\n' \ '{:<22} {:<14} {:<12} {:<12} {:<10} {:<4} ' \ '{:<3} {:<14} {:<12} {:<12} {:<10} {:<12}\n' soft_header = soft_header_format.format( 'Soft IRQ', 'Duration (us)', 'Raise latency (us)', '', 'count', 'min', 'avg', 'max', 'stdev', ' |', 'count', 'min', 'avg', 'max', 'stdev' ) soft_header += '-' * 82 + '|' + '-' * 60 if hard_stats_table.rows or soft_stats_table.rows: stats_rows = itertools.chain(hard_stats_table.rows, soft_stats_table.rows) if freq_tables: for stats_row, freq_table in zip(stats_rows, freq_tables): irq = stats_row.irq if irq.is_hard: print(hard_header) self._print_hard_irq_stats_row(stats_row) else: print(soft_header) self._print_soft_irq_stats_row(stats_row) # frequency table might be empty: do not print if freq_table.rows: print() self._print_frequency_distribution(freq_table) print() else: hard_header_printed = False soft_header_printed = False for stats_row in stats_rows: irq = stats_row.irq if irq.is_hard: if not hard_header_printed: print(hard_header) hard_header_printed = True self._print_hard_irq_stats_row(stats_row) else: if not soft_header_printed: if hard_header_printed: print() print(soft_header) soft_header_printed = True self._print_soft_irq_stats_row(stats_row) return for freq_table in freq_tables: # frequency table might be empty: do not print if freq_table.rows: print() self._print_frequency_distribution(freq_table) def _add_arguments(self, ap): Command._add_min_max_args(ap) Command._add_freq_args( ap, help='Output the frequency distribution of handler durations') Command._add_log_args( ap, help='Output the IRQs in chronological order') Command._add_stats_args(ap, help='Output IRQ statistics') ap.add_argument('--irq', type=str, default=None, help='Output results only for the list of IRQ') ap.add_argument('--softirq', type=str, default=None, help='Output results only for the list of SoftIRQ') def _run(mi_mode): irqcmd = IrqAnalysisCommand(mi_mode=mi_mode) irqcmd.run() def _runstats(mi_mode): sys.argv.insert(1, '--stats') _run(mi_mode) def _runlog(mi_mode): sys.argv.insert(1, '--log') _run(mi_mode) def _runfreq(mi_mode): sys.argv.insert(1, '--freq') _run(mi_mode) def runstats(): _runstats(mi_mode=False) def runlog(): _runlog(mi_mode=False) def runfreq(): _runfreq(mi_mode=False) def runstats_mi(): _runstats(mi_mode=True) def runlog_mi(): _runlog(mi_mode=True) def 
runfreq_mi(): _runfreq(mi_mode=True) lttnganalyses-0.6.1/lttnganalyses/cli/command.py0000664000175000017500000011015613033475105023512 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2016 - Philippe Proulx # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import argparse import json import os import re import sys import subprocess import traceback from babeltrace import TraceCollection from . import mi, progressbar, period_parsing from .. import __version__ from ..core import analysis, period as core_period from ..common import ( format_utils, parse_utils, trace_utils, version_utils ) from ..linuxautomaton import automaton class Command: _MI_BASE_TAGS = ['linux-kernel', 'lttng-analyses'] _MI_AUTHORS = [ 'Julien Desfossez', 'Antoine Busque', 'Philippe Proulx', ] _MI_URL = 'https://github.com/lttng/lttng-analyses' _VERSION = version_utils.Version.new_from_string(__version__) _BT_INTERSECT_VERSION = version_utils.Version(1, 4, 0) _DEBUG_ENV_VAR = 'LTTNG_ANALYSES_DEBUG' def __init__(self, mi_mode=False): self._analysis = None self._analysis_conf = None self._args = None self._babeltrace_version = None self._handles = None self._traces = None self._period_ticks = 0 self._mi_mode = mi_mode self._debug_mode = os.environ.get(self._DEBUG_ENV_VAR) self._run_step('create automaton', self._create_automaton) self._run_step('setup MI', self._mi_setup) @property def mi_mode(self): return self._mi_mode def _run_step(self, action_title, fn): try: fn() except KeyboardInterrupt: self._print('Cancelled by user') sys.exit(0) except Exception as e: self._gen_error('Cannot {}: {}'.format(action_title, e)) def run(self): self._run_step('parse arguments', self._parse_args) self._run_step('open trace', self._open_trace) self._run_step('create analysis', self._create_analysis) if not self._mi_mode or not self._args.test_compatibility: self._run_step('run analysis', self._run_analysis) self._run_step('close trace', self._close_trace) def _mi_error(self, msg, code=None): print(json.dumps(mi.get_error(msg, code))) def _non_mi_msg(self, msg, color): if self._args.color: try: import termcolor msg = termcolor.colored(msg, color, attrs=['bold']) except ImportError: pass print(msg, file=sys.stderr) def _non_mi_error(self, msg): self._non_mi_msg(msg, 'red') def _non_mi_warn(self, msg): self._non_mi_msg(msg, 'yellow') def _error(self, msg, exit_code=1): if self._debug_mode: traceback.print_exc() if self._mi_mode: self._mi_error(msg) else: 
self._non_mi_error(msg) if exit_code is not None: sys.exit(exit_code) def _warn(self, msg): if not self._mi_mode: self._non_mi_warn(msg) def _gen_error(self, msg, exit_code=1): self._error('Error: {}'.format(msg), exit_code) def _cmdline_error(self, msg, exit_code=1): self._error('Command line error: {}'.format(msg), exit_code) def _print(self, msg): if not self._mi_mode: print(msg) def _mi_create_result_table(self, table_class_name, begin, end, subtitle=None): return mi.ResultTable(self._mi_table_classes[table_class_name], begin, end, subtitle) def _mi_setup(self): self._mi_table_classes = {} for tc_tuple in self._MI_TABLE_CLASSES: table_class = mi.TableClass(tc_tuple[0], tc_tuple[1], tc_tuple[2]) self._mi_table_classes[table_class.name] = table_class self._mi_clear_result_tables() def _mi_print_metadata(self): tags = self._MI_BASE_TAGS + self._MI_TAGS infos = mi.get_metadata(version=self._VERSION, title=self._MI_TITLE, description=self._MI_DESCRIPTION, authors=self._MI_AUTHORS, url=self._MI_URL, tags=tags, table_classes=self._mi_table_classes.values()) print(json.dumps(infos)) def _mi_append_result_table(self, result_table): if not result_table or not result_table.rows: return tc_name = result_table.table_class.name self._mi_get_result_tables(tc_name).append(result_table) def _mi_append_result_tables(self, result_tables): if not result_tables: return for result_table in result_tables: self._mi_append_result_table(result_table) def _mi_clear_result_tables(self): self._result_tables = {} def _mi_get_result_tables(self, table_class_name): if table_class_name not in self._result_tables: self._result_tables[table_class_name] = [] return self._result_tables[table_class_name] def _mi_print(self): results = [] for result_tables in self._result_tables.values(): for result_table in result_tables: results.append(result_table.to_native_object()) obj = { 'results': results, } print(json.dumps(obj)) def _create_summary_result_tables(self): pass def _open_trace(self): self._babeltrace_version = trace_utils.read_babeltrace_version() if self._babeltrace_version >= self._BT_INTERSECT_VERSION: traces = TraceCollection(intersect_mode=self._args.intersect_mode) else: if self._args.intersect_mode: self._print('Warning: intersect mode not available - ' 'disabling') self._print(' Use babeltrace {} or later to ' 'enable'.format( trace_utils.BT_INTERSECT_VERSION)) self._args.intersect_mode = False traces = TraceCollection() handles = traces.add_traces_recursive(self._args.path, 'ctf') if handles == {}: self._gen_error('Failed to open ' + self._args.path, -1) self._handles = handles self._traces = traces self._ts_begin = traces.timestamp_begin self._ts_end = traces.timestamp_end self._process_date_args() self._read_tracer_version() if not self._args.skip_validation: self._check_lost_events() if not self._check_period_args(): self._gen_error('Invalid period parameters') def _close_trace(self): for handle in self._handles.values(): self._traces.remove_trace(handle) def _read_tracer_version(self): # TODO: associate the version of the tracer with each trace, not # globally. Waiting for bug #1085 to be fixed in Babeltrace. kernel_path = None # remove the trailing / while self._args.path.endswith('/'): self._args.path = self._args.path[:-1] for root, _, _ in os.walk(self._args.path): if root.endswith('kernel'): kernel_path = root break # If we don't have a kernel folder, we don't need to check the version # of the tracer for now. 
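# Illustration — a minimal, self-contained sketch (not part of the original
# module): the method above locates the kernel trace directory, and the code
# that follows extracts the tracer version by regex-matching the
# `tracer_major`, `tracer_minor` and `tracer_patchlevel` fields of the CTF
# metadata text.  The sketch assumes a hypothetical metadata excerpt as input
# (the real input comes from `babeltrace -o ctf-metadata` or the trace's
# `metadata` file); the `_demo_parse_tracer_version` name is illustrative.
import re


def _demo_parse_tracer_version(metadata_text):
    # the values may appear quoted or unquoted, hence the '"*' in the pattern
    major = re.search(r'tracer_major = "*(\d+)"*', metadata_text)
    minor = re.search(r'tracer_minor = "*(\d+)"*', metadata_text)
    patch = re.search(r'tracer_patchlevel = "*(\d+)"*', metadata_text)

    if not (major and minor and patch):
        raise ValueError('malformed metadata: missing tracer version fields')

    return (int(major.group(1)), int(minor.group(1)), int(patch.group(1)))


# Example:
#     _demo_parse_tracer_version(
#         'env { tracer_major = 2; tracer_minor = 9; '
#         'tracer_patchlevel = 0; };')
# returns (2, 9, 0).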
        if kernel_path is None:
            return

        try:
            ret, metadata = subprocess.getstatusoutput(
                'babeltrace -o ctf-metadata "%s"' % kernel_path)
        except subprocess.CalledProcessError:
            self._gen_error('Cannot run babeltrace on the trace, cannot read'
                            ' tracer version')

        # fallback to reading the text metadata if babeltrace failed to
        # output the CTF metadata
        if ret != 0:
            try:
                metadata = subprocess.getoutput(
                    'cat "%s"' % os.path.join(kernel_path, 'metadata'))
            except subprocess.CalledProcessError:
                self._gen_error('Cannot read the metadata of the trace, '
                                'cannot extract tracer version')

        major_match = re.search(r'tracer_major = "*(\d+)"*', metadata)
        minor_match = re.search(r'tracer_minor = "*(\d+)"*', metadata)
        patch_match = re.search(r'tracer_patchlevel = "*(\d+)"*', metadata)

        if not major_match or not minor_match or not patch_match:
            self._gen_error('Malformed metadata, cannot read tracer version')

        self.state.tracer_version = version_utils.Version(
            int(major_match.group(1)),
            int(minor_match.group(1)),
            int(patch_match.group(1)),
        )

    def _read_babeltrace_version(self):
        try:
            output = subprocess.check_output('babeltrace')
        except subprocess.CalledProcessError:
            self._gen_error('Could not run babeltrace to verify version')

        output = output.decode(sys.stdout.encoding)
        first_line = output.splitlines()[0]
        version_string = first_line.split()[-1]
        self._babeltrace_version = \
            version_utils.Version.new_from_string(version_string)

    def _check_lost_events(self):
        msg = 'Checking the trace for lost events...'
        self._print(msg)
        if self._mi_mode and self._args.output_progress:
            mi.print_progress(0, msg)
        try:
            subprocess.check_output('babeltrace "%s"' % self._args.path,
                                    shell=True)
        except subprocess.CalledProcessError:
            self._gen_error('Cannot run babeltrace on the trace, cannot '
                            'verify if events were lost during the trace '
                            'recording')

    def _pre_analysis(self):
        pass

    def _mi_post_analysis(self):
        if not self._mi_mode:
            return

        if self._period_ticks > 1:
            self._create_summary_result_tables()

        self._mi_print()

    def _post_analysis(self):
        self._mi_post_analysis()

    def _pb_setup(self):
        if self._args.no_progress:
            return

        ts_end = self._ts_end

        if self._analysis_conf.end_ts is not None:
            ts_end = self._analysis_conf.end_ts

        if self._mi_mode:
            cls = progressbar.MiProgress
        else:
            cls = progressbar.FancyProgressBar

        self._progress = cls(self._ts_begin, ts_end, self._args.path,
                             self._args.progress_use_size)

    def _pb_update(self, event):
        if self._args.no_progress:
            return

        self._progress.update(event)

    def _pb_finish(self):
        if self._args.no_progress:
            return

        self._progress.finalize()

    def _run_analysis(self):
        self._pre_analysis()
        self._pb_setup()

        if self._args.intersect_mode:
            if not self._traces.has_intersection:
                self._gen_error('Trace has no intersection. 
' 'Use --no-intersection to override') first_event = True for event in self._traces.events: if first_event is True: self._analysis.begin_analysis(event) first_event = False self._pb_update(event) self._analysis.process_event(event) if self._analysis.ended: break self._automaton.process_event(event) self._pb_finish() self._analysis.end_analysis() self._post_analysis() def _print_date(self, begin_ns, end_ns): time_range_str = format_utils.format_time_range( begin_ns, end_ns, print_date=True, gmt=self._args.gmt ) date = 'Timerange: {}'.format(time_range_str) self._print(date) def _format_timestamp(self, timestamp): return format_utils.format_timestamp( timestamp, print_date=self._args.multi_day, gmt=self._args.gmt ) def _uniform_freq_min(self, category='default'): return self._analysis_conf.uniform_min[category] def _uniform_freq_max(self, category='default'): return self._analysis_conf.uniform_max[category] def _uniform_freq_step(self, category='default'): return self._analysis_conf.uniform_step[category] def _find_uniform_freq_values(self, durations, ratio=1000, category='default'): if category not in self._analysis_conf.uniform_step.keys(): self._analysis_conf.uniform_min[category] = None self._analysis_conf.uniform_max[category] = None self._analysis_conf.uniform_step[category] = None if self._args.min is not None: self._analysis_conf.uniform_min[category] = self._args.min else: if len(durations) == 0: self._analysis_conf.uniform_min[category] = 0 else: if self._analysis_conf.uniform_min[category] is None or \ min(durations) / ratio < \ self._analysis_conf.uniform_min[category]: self._analysis_conf.uniform_min[category] = \ min(durations) / ratio if self._args.max is not None: self._analysis_conf.uniform_max[category] = self._args.max else: if len(durations) == 0: self._analysis_conf.uniform_max[category] = 0 else: if self._analysis_conf.uniform_max[category] is None or \ max(durations) / ratio > \ self._analysis_conf.uniform_max[category]: self._analysis_conf.uniform_max[category] = \ max(durations) / ratio # ns to µs self._analysis_conf.uniform_step[category] = ( (self._analysis_conf.uniform_max[category] - self._analysis_conf.uniform_min[category]) / self._args.freq_resolution) return self._analysis_conf.uniform_min[category], \ self._analysis_conf.uniform_max[category], \ self._analysis_conf.uniform_step[category] def _check_period_args(self): # FIXME return True if len(self._analysis_conf.period_defs) > 0: name = self._analysis_conf.period_begin_ev_name if not trace_utils.check_event_exists(self._handles, name): self._gen_error("Event %s not found in the trace" % name) return False if self._analysis_conf.period_end_ev_name is not None: name = self._analysis_conf.period_end_ev_name if not trace_utils.check_event_exists(self._handles, name): self._gen_error("Event %s not found in the trace" % name) return False ev_name = self._analysis_conf.period_begin_ev_name for field in self._analysis_conf.period_begin_key_fields: if not ev_name: break if not trace_utils.check_field_exists(self._handles, ev_name, field): self._gen_error("Field %s not found in event %s" % (field, ev_name)) return False ev_name = self._analysis_conf.period_end_ev_name for field in self._analysis_conf.period_end_key_fields: if not ev_name: break if not trace_utils.check_field_exists(self._handles, ev_name, field): self._gen_error("Field %s not found in event %s" % (field, ev_name)) return False return True def _validate_transform_period_args(self, analysis_conf): args = self._args # validate period arguments if 
(args.period_begin is not None or args.period_end is not None or args.period_key_value is not None or args.period_begin_key is not None or args.period_end_key is not None) and (args.period or args.period_captures): self._cmdline_error('Do not use another period option when using ' 'one or more --period or --period-captures ' 'options') registry = self._analysis_conf.period_def_registry name_to_begin_captures_exprs = {} name_to_end_captures_exprs = {} # parse period definition expressions if args.period: # period captures first if args.period_captures: for arg in args.period_captures: try: res = period_parsing.parse_period_captures_arg(arg) except period_parsing.MalformedExpression as e: self._cmdline_error('Malformed period captures ' 'expression: {}'.format(e)) except Exception as e: self._cmdline_error('Cannot parse period captures ' 'expression: {}'.format(e)) if res.name in name_to_begin_captures_exprs: fmt = 'Duplicate period name "{}" in ' \ '--period-captures argument' self._cmdline_error(fmt.format(res.name)) name_to_begin_captures_exprs[res.name] = \ res.begin_captures_exprs name_to_end_captures_exprs[res.name] = \ res.end_captures_exprs for period_arg in args.period: try: res = period_parsing.parse_period_def_arg(period_arg) except period_parsing.MalformedExpression as e: self._cmdline_error('Malformed period definition ' 'expression: {}'.format(e)) except Exception as e: self._cmdline_error('Cannot parse period definition ' 'expression: {}'.format(e)) begin_captures_exprs = {} end_captures_exprs = {} if res.period_name is not None: begin_captures_exprs = name_to_begin_captures_exprs.get( res.period_name, {}) end_captures_exprs = name_to_end_captures_exprs.get( res.period_name, {}) try: registry.add_period_def(res.parent_name, res.period_name, res.begin_expr, res.end_expr, begin_captures_exprs, end_captures_exprs) except core_period.IllegalExpression as e: self._cmdline_error('Illegal period definition ' 'expression: {}'.format(e)) except core_period.InvalidPeriodDefinition as e: self._cmdline_error('Cannot add period: {}'.format(e), None) self._error('Period argument: {}'.format(period_arg)) elif (args.period_begin is not None or args.period_end is not None): self._warn('''Warning: The following period options are deprecated: --period-begin --period-end --period-begin-key --period-end-key --period-key-value Please consider using the --period option.''') # create new-style expression from old-style arguments if args.period_begin is None: # ignore incomplete period definition return if not args.period_begin_key: args.period_begin_key = 'cpu_id' if not args.period_end: args.period_end = args.period_begin if not args.period_end_key: args.period_end_key = args.period_begin_key begin_exprs = [] end_exprs = [] # conditions for matching the event name begin_event_name_expr = core_period.EventScope( core_period.EventName()) begin_event_name_str_expr = core_period.String(args.period_begin) end_event_name_expr = core_period.EventScope( core_period.EventName()) end_event_name_str_expr = core_period.String(args.period_end) begin_event_name_eq_expr = core_period.Eq( begin_event_name_expr, begin_event_name_str_expr) begin_exprs.append(begin_event_name_eq_expr) end_event_name_eq_expr = core_period.Eq(end_event_name_expr, end_event_name_str_expr) end_exprs.append(end_event_name_eq_expr) begin_field_names = args.period_begin_key.split(',') end_field_names = args.period_end_key.split(',') # conditions for begin field values if args.period_key_value: parts = args.period_key_value.split(',') 
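# Illustration — a minimal standalone sketch (not part of the original
# module): the loop that follows treats each comma-separated part of the
# legacy --period-key-value argument as a number when it parses as a float,
# and as a plain string otherwise.  The names here are hypothetical
# (`_demo_coerce_period_key_values`; the ('Number', ...) and ('String', ...)
# tags stand in for the core_period expression classes used by the real code).
def _demo_coerce_period_key_values(arg):
    def coerce(part):
        try:
            return ('Number', float(part))
        except ValueError:  # narrower than the bare `except` used below
            return ('String', part)

    return [coerce(part) for part in arg.split(',')]


# Example: _demo_coerce_period_key_values('0,open,3.5')
# returns [('Number', 0.0), ('String', 'open'), ('Number', 3.5)].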
value_exprs = [] for part in parts: try: value_exprs.append(core_period.Number(float(part))) except: value_exprs.append(core_period.String(part)) for field_name, value_expr in zip(begin_field_names, value_exprs): event_field_name = core_period.EventFieldName(field_name) dyn_scope = core_period.DynamicScope( core_period.DynScope.AUTO, event_field_name) evt_scope = core_period.EventScope(dyn_scope) begin_scope = core_period.BeginScope(evt_scope) eq_expr = core_period.Eq(begin_scope, value_expr) begin_exprs.append(eq_expr) # conditions for equal end and begin fields for begin_field_name, end_field_name in zip(begin_field_names, end_field_names): # begin scope event_field_name = core_period.EventFieldName(begin_field_name) dyn_scope = core_period.DynamicScope(core_period.DynScope.AUTO, event_field_name) evt_scope = core_period.EventScope(dyn_scope) begin_scope = core_period.BeginScope(evt_scope) # end event scope event_field_name = core_period.EventFieldName(end_field_name) dyn_scope = core_period.DynamicScope(core_period.DynScope.AUTO, event_field_name) end_evt_scope = core_period.EventScope(dyn_scope) eq_expr = core_period.Eq(end_evt_scope, begin_scope) end_exprs.append(eq_expr) begin_expr = core_period.create_conjunction_from_exprs(begin_exprs) end_expr = core_period.create_conjunction_from_exprs(end_exprs) registry.add_period_def(None, None, begin_expr, end_expr, {}, {}) # check that --period-captures name existing periods for name in name_to_begin_captures_exprs: if not registry.has_period_def(name): fmt = 'Cannot find period named "{}" for --period-captures ' \ 'argument' self._cmdline_error(fmt.format(name)) def _validate_transform_common_args(self): args = self._args refresh_period_ns = None if args.refresh is not None: try: refresh_period_ns = parse_utils.parse_duration(args.refresh) except ValueError as e: self._cmdline_error(str(e)) self._analysis_conf = analysis.AnalysisConfig() self._analysis_conf.refresh_period = refresh_period_ns self._validate_transform_period_args(self._analysis_conf) if args.refresh is not None and not \ self._analysis_conf.period_def_registry.is_empty: self._cmdline_error('Cannot specify --period* and --refresh ' 'arguments at the same time') if args.cpu: self._analysis_conf.cpu_list = args.cpu.split(',') self._analysis_conf.cpu_list = [int(cpu) for cpu in self._analysis_conf.cpu_list] if args.debug: self._debug_mode = True # convert min/max args from µs to ns, if needed if hasattr(args, 'min') and args.min is not None: args.min *= 1000 self._analysis_conf.min_duration = args.min if hasattr(args, 'max') and args.max is not None: args.max *= 1000 self._analysis_conf.max_duration = args.max if hasattr(args, 'procname'): if args.procname: self._analysis_conf.proc_list = args.procname.split(',') if hasattr(args, 'tid'): if args.tid: self._analysis_conf.tid_list = args.tid.split(',') self._analysis_conf.tid_list = [int(tid) for tid in self._analysis_conf.tid_list] if hasattr(args, 'freq'): if args.freq_series: # implies uniform buckets args.freq_uniform = True if hasattr(args, 'freq') and args.freq_uniform: self._analysis_conf.uniform_min = {} self._analysis_conf.uniform_max = {} self._analysis_conf.uniform_step = {} if self._mi_mode: # print MI version if required if args.mi_version: print(mi.get_version_string()) sys.exit(0) # print MI metadata if required if args.metadata: self._mi_print_metadata() sys.exit(0) # validate path argument (required at this point) if not args.path: self._cmdline_error('Please specify a trace path') if type(args.path) is list: 
args.path = args.path[0] def _validate_transform_args(self): pass def _parse_args(self): ap = argparse.ArgumentParser(description=self._DESC) # common arguments ap.add_argument('-r', '--refresh', type=str, help='Refresh period, with optional units suffix ' '(default units: s)') ap.add_argument('--gmt', action='store_true', help='Manipulate timestamps based on GMT instead ' 'of local time') ap.add_argument('--skip-validation', action='store_true', help='Skip the trace validation') ap.add_argument('--begin', type=str, help='start time: ' 'hh:mm:ss[.nnnnnnnnn]') ap.add_argument('--end', type=str, help='end time: ' 'hh:mm:ss[.nnnnnnnnn]') ap.add_argument('--period', action='append', help='Period definition') ap.add_argument('--period-captures', action='append', help='Period captures definition') ap.add_argument('--period-begin', type=str, help='Analysis period start marker event name') ap.add_argument('--period-end', type=str, help='Analysis period end marker event name ' '(requires --period-begin)') ap.add_argument('--period-begin-key', type=str, help='Optional, list of event field names used to ' 'match period markers (default: cpu_id)') ap.add_argument('--period-end-key', type=str, help='Optional, list of event field names used to ' 'match period marker. If none specified, use the same ' ' --period-begin-key') ap.add_argument('--period-key-value', type=str, help='Optional, define a fixed key value to which a' ' period must correspond to be considered.') ap.add_argument('--cpu', type=str, help='Filter the results only for this list of ' 'CPU IDs') ap.add_argument('--timerange', type=str, help='time range: ' '[begin,end]') ap.add_argument('--progress-use-size', action='store_true', help='use trace size to approximate progress') ap.add_argument('--no-intersection', action='store_false', dest='intersect_mode', help='disable stream intersection mode') ap.add_argument('-V', '--version', action='version', version='LTTng Analyses v{}'.format(self._VERSION)) ap.add_argument('--debug', action='store_true', help='Enable debug mode (or set {} environment ' 'variable)'.format(self._DEBUG_ENV_VAR)) ap.add_argument('--no-color', action='store_false', dest='color', help='Disable colored output') # MI mode-dependent arguments if self._mi_mode: ap.add_argument('--mi-version', action='store_true', help='Print MI version') ap.add_argument('--metadata', action='store_true', help='Print analysis\' metadata') ap.add_argument('--test-compatibility', action='store_true', help='Check if the provided trace is supported ' 'and exit') ap.add_argument('path', metavar='', help='trace path', nargs='*') ap.add_argument('--output-progress', action='store_true', help='Print progress indication lines') else: ap.add_argument('--no-progress', action='store_true', help='Don\'t display the progress bar') ap.add_argument('path', metavar='', help='trace path') # Used to add command-specific args self._add_arguments(ap) self._args = ap.parse_args() if self._mi_mode: # Compatiblity checking does not need to read the whole # trace, the caller should make sure there are no lost # events. At worst, they will be detected when the analysis # is actually run. 
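# Illustration — a small sketch (not part of the original module) of the unit
# convention assumed around here: --min/--max are taken in microseconds on
# the command line and converted once to nanoseconds internally
# (`args.min *= 1000` earlier), while trace timestamps stay in nanoseconds
# and are divided by 1000 for display with three decimals.  The helper names
# are hypothetical.
def _demo_us_to_ns(duration_us):
    # command-line microseconds -> internal nanoseconds
    return duration_us * 1000


def _demo_ns_to_us_str(duration_ns):
    # internal nanoseconds -> printable microseconds string
    return '%0.03f' % (duration_ns / 1000)


# Example: _demo_us_to_ns(2.5) returns 2500.0;
# _demo_ns_to_us_str(2500) returns '2.500'.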
if self._args.test_compatibility: self._args.skip_validation = True self._args.no_progress = True if self._args.output_progress: self._args.no_progress = False self._validate_transform_common_args() self._validate_transform_args() @staticmethod def _add_proc_filter_args(ap): ap.add_argument('--procname', type=str, help='Filter the results only for this list of ' 'process names') ap.add_argument('--tid', type=str, help='Filter the results only for this list of TIDs') @staticmethod def _add_min_max_args(ap): ap.add_argument('--min', type=float, help='Filter out durations shorter than min usec') ap.add_argument('--max', type=float, help='Filter out durations longer than max usec') @staticmethod def _add_freq_args(ap, help=None): if not help: help = 'Output the frequency distribution' ap.add_argument('--freq', action='store_true', help=help) ap.add_argument('--freq-resolution', type=int, default=20, help='Frequency distribution resolution ' '(default 20)') ap.add_argument('--freq-uniform', action='store_true', help='Use a uniform resolution across distributions') ap.add_argument('--freq-series', action='store_true', help='Consolidate frequency distribution histogram ' 'as a single one') @staticmethod def _add_log_args(ap, help=None): if not help: help = 'Output the events in chronological order' ap.add_argument('--log', action='store_true', help=help) @staticmethod def _add_top_args(ap, help=None): if not help: help = 'Output the top results' ap.add_argument('--limit', type=int, default=10, help='Limit to top X (default = 10)') ap.add_argument('--top', action='store_true', help=help) @staticmethod def _add_stats_args(ap, help=None): if not help: help = 'Output statistics' ap.add_argument('--stats', action='store_true', help=help) def _add_arguments(self, ap): pass def _process_date_args(self): def parse_date(date): try: ts = parse_utils.parse_trace_collection_date( self._traces, date, self._args.gmt, self._handles ) except ValueError as e: self._cmdline_error(str(e)) return ts self._args.multi_day = trace_utils.is_multi_day_trace_collection( self._traces, self._handles) begin_ts = None end_ts = None if self._args.timerange: try: begin_ts, end_ts = ( parse_utils.parse_trace_collection_time_range( self._traces, self._args.timerange, self._args.gmt, self._handles) ) except ValueError as e: self._cmdline_error(str(e)) else: if self._args.begin: begin_ts = parse_date(self._args.begin) if self._args.end: end_ts = parse_date(self._args.end) # We have to check if timestamp_begin is None, which # it always is in older versions of babeltrace. 
In # that case, the test is simply skipped and an invalid # --end value will cause an empty analysis if self._ts_begin is not None and \ end_ts < self._ts_begin: self._cmdline_error( '--end timestamp before beginning of trace') self._analysis_conf.begin_ts = begin_ts self._analysis_conf.end_ts = end_ts def _create_analysis(self): notification_cbs = { analysis.AnalysisCallbackType.TICK_CB: self._analysis_tick_cb } self._analysis = self._ANALYSIS_CLASS(self.state, self._analysis_conf) self._analysis.register_notification_cbs(notification_cbs) def _create_automaton(self): self._automaton = automaton.Automaton() self.state = self._automaton.state def _analysis_tick_cb(self, period, end_ns): # No event was processed, just exit if end_ns is None: return self._analysis_tick(period, end_ns) if period is not None: # increment the number of effective ticks associated to # an existing period self._period_ticks += 1 def _analysis_tick(self, period, end_ns): raise NotImplementedError() lttnganalyses-0.6.1/lttnganalyses/cli/termgraph.py0000664000175000017500000001464712723101501024064 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
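# Illustration — a minimal standalone sketch (not part of the original
# module): the classes below render terminal bar and frequency graphs by
# scaling each value to a bar width proportional to the largest value, capped
# at MAX_GRAPH_WIDTH columns.  The `_demo_bar` name is hypothetical.
def _demo_bar(value, max_value, width=80, bar_char='█'):
    # guard against a division by zero when every value is zero, as the real
    # graph classes do
    bar_width = 0 if max_value == 0 else int(width * value / max_value)
    return bar_char * bar_width + ' ' * (width - bar_width)


# Example: _demo_bar(5, 10, width=40) is 20 filled columns followed by
# 20 spaces (len == 40).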
from collections import namedtuple GraphDatum = namedtuple('GraphDatum', ['value', 'value_str']) BarGraphDatum = namedtuple('BarGraphDatum', ['value', 'value_str', 'label']) FreqGraphDatum = namedtuple( 'FreqGraphDatum', ['value', 'value_str', 'lower_bound'] ) class Graph(): MAX_GRAPH_WIDTH = 80 BAR_CHAR = '█' HR_CHAR = '#' def __init__(self, data, get_value, get_value_str, title, unit): self._data = data self._get_value = get_value self._title = title self._unit = unit self._max_value = 0 self._max_value_len = 0 if get_value_str is not None: self._get_value_str_cb = get_value_str else: self._get_value_str_cb = Graph._get_value_str_default def _transform_data(self, data): graph_data = [] for datum in data: graph_datum = self._get_graph_datum(datum) if graph_datum.value > self._max_value: self._max_value = graph_datum.value if len(graph_datum.value_str) > self._max_value_len: self._max_value_len = len(graph_datum.value_str) graph_data.append(graph_datum) return graph_data def _get_value_str(self, value): return self._get_value_str_cb(value) def _get_graph_datum(self, datum): value = self._get_value(datum) value_str = self._get_value_str(value) return GraphDatum(value, value_str) def _print_header(self): if self._title: print(self._title) def _print_separator(self): print(self.HR_CHAR * self.MAX_GRAPH_WIDTH) def _print_body(self): raise NotImplementedError() def print_graph(self): if not self._data: return self._print_header() self._print_separator() self._print_body() print() @staticmethod def _get_value_str_default(value): if isinstance(value, float): value_str = '{:0.02f}'.format(value) else: value_str = str(value) return value_str class BarGraph(Graph): def __init__(self, data, get_value, get_label, get_value_str=None, title=None, label_header=None, unit=None): super().__init__(data, get_value, get_value_str, title, unit) self._get_label = get_label self._label_header = label_header self._data = self._transform_data(self._data) def _get_graph_datum(self, datum): value = self._get_value(datum) value_str = self._get_value_str(value) label = self._get_label(datum) return BarGraphDatum(value, value_str, label) def _get_value_str(self, value): value_str = super()._get_value_str(value) if self._unit: value_str += ' ' + self._unit return value_str def _get_graph_header(self): if not self._label_header: return self._title title_len = len(self._title) space_width = ( self.MAX_GRAPH_WIDTH - title_len + 1 + self._max_value_len + 1 ) return self._title + ' ' * space_width + self._label_header def _print_header(self): header = self._get_graph_header() print(header) def _get_bar_str(self, datum): if self._max_value == 0: bar_width = 0 else: bar_width = int(self.MAX_GRAPH_WIDTH * datum.value / self._max_value) space_width = self.MAX_GRAPH_WIDTH - bar_width bar_str = self.BAR_CHAR * bar_width + ' ' * space_width return bar_str def _print_body(self): for datum in self._data: bar_str = self._get_bar_str(datum) value_padding = ' ' * (self._max_value_len - len(datum.value_str)) print(bar_str, value_padding + datum.value_str, datum.label) class FreqGraph(Graph): LOWER_BOUND_WIDTH = 8 def __init__(self, data, get_value, get_lower_bound, get_value_str=None, title=None, unit=None): super().__init__(data, get_value, get_value_str, title, unit) self._get_lower_bound = get_lower_bound self._data = self._transform_data(self._data) def _get_graph_datum(self, datum): value = self._get_value(datum) value_str = self._get_value_str(value) lower_bound = self._get_lower_bound(datum) return FreqGraphDatum(value, value_str, 
lower_bound) def _print_header(self): header = self._title if self._unit: header += ' ({})'.format(self._unit) print(header) def _get_bar_str(self, datum): max_width = self.MAX_GRAPH_WIDTH - self.LOWER_BOUND_WIDTH if self._max_value == 0: bar_width = 0 else: bar_width = int(max_width * datum.value / self._max_value) space_width = max_width - bar_width bar_str = self.BAR_CHAR * bar_width + ' ' * space_width return bar_str def _print_body(self): for datum in self._data: bound_str = FreqGraph._get_bound_str(datum) bar_str = self._get_bar_str(datum) value_padding = ' ' * (self._max_value_len - len(datum.value_str)) print(bound_str, bar_str, value_padding + datum.value_str) @staticmethod def _get_bound_str(datum): return '{:>7.03f}'.format(datum.lower_bound) lttnganalyses-0.6.1/lttnganalyses/cli/sched.py0000664000175000017500000010003413033475105023154 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import sys import math import operator import statistics import collections from . 
import mi, termgraph from ..core import sched from .command import Command from ..common import format_utils _SchedStats = collections.namedtuple('_SchedStats', [ 'count', 'min', 'max', 'stdev', 'total', ]) class SchedAnalysisCommand(Command): _DESC = """The sched command.""" _ANALYSIS_CLASS = sched.SchedAnalysis _MI_TITLE = 'Scheduling latencies analysis' _MI_DESCRIPTION = \ 'Scheduling latencies frequency distribution, statistics, top, and log' _MI_TAGS = [mi.Tags.SCHED, mi.Tags.STATS, mi.Tags.FREQ, mi.Tags.TOP, mi.Tags.LOG] _MI_TABLE_CLASS_LOG = 'log' _MI_TABLE_CLASS_TOP = 'top' _MI_TABLE_CLASS_TOTAL_STATS = 'total_stats' _MI_TABLE_CLASS_PER_TID_STATS = 'per_tid_stats' _MI_TABLE_CLASS_PER_PRIO_STATS = 'per_prio_stats' _MI_TABLE_CLASS_FREQ = 'freq' # _MI_TABLE_CLASS_SUMMARY = 'summary' _MI_TABLE_CLASSES = [ ( _MI_TABLE_CLASS_LOG, 'Scheduling log', [ ('wakeup_ts', 'Wakeup timestamp', mi.Timestamp), ('switch_ts', 'Switch timestamp', mi.Timestamp), ('latency', 'Scheduling latency', mi.Duration), ('prio', 'Priority', mi.Number), ('target_cpu', 'Target CPU', mi.Cpu), ('wakee_proc', 'Wakee process', mi.Process), ('waker_proc', 'Waker process', mi.Process), ] ), ( _MI_TABLE_CLASS_TOP, 'Scheduling top', [ ('wakeup_ts', 'Wakeup timestamp', mi.Timestamp), ('switch_ts', 'Switch timestamp', mi.Timestamp), ('latency', 'Scheduling latency', mi.Duration), ('prio', 'Priority', mi.Number), ('target_cpu', 'Target CPU', mi.Cpu), ('wakee_proc', 'Wakee process', mi.Process), ('waker_proc', 'Waker process', mi.Process), ] ), ( _MI_TABLE_CLASS_TOTAL_STATS, 'Scheduling latency stats (total)', [ ('count', 'Scheduling count', mi.Number, 'schedulings'), ('min_latency', 'Minimum latency', mi.Duration), ('avg_latency', 'Average latency', mi.Duration), ('max_latency', 'Maximum latency', mi.Duration), ('stdev_latency', 'Scheduling latency standard deviation', mi.Duration), ] ), ( _MI_TABLE_CLASS_PER_TID_STATS, 'Scheduling latency stats (per-TID)', [ ('process', 'Wakee process', mi.Process), ('count', 'Scheduling count', mi.Number, 'schedulings'), ('min_latency', 'Minimum latency', mi.Duration), ('avg_latency', 'Average latency', mi.Duration), ('max_latency', 'Maximum latency', mi.Duration), ('stdev_latency', 'Scheduling latency standard deviation', mi.Duration), ('prio_list', 'Chronological priorities', mi.String), ] ), ( _MI_TABLE_CLASS_PER_PRIO_STATS, 'Scheduling latency stats (per-prio)', [ ('prio', 'Priority', mi.Number), ('count', 'Scheduling count', mi.Number, 'schedulings'), ('min_latency', 'Minimum latency', mi.Duration), ('avg_latency', 'Average latency', mi.Duration), ('max_latency', 'Maximum latency', mi.Duration), ('stdev_latency', 'Scheduling latency standard deviation', mi.Duration), ] ), ( _MI_TABLE_CLASS_FREQ, 'Scheduling latency frequency distribution', [ ('duration_lower', 'Duration (lower bound)', mi.Duration), ('duration_upper', 'Duration (upper bound)', mi.Duration), ('count', 'Scheduling count', mi.Number, 'schedulings'), ] ), ] def _analysis_tick(self, period_data, end_ns): if period_data is None: return begin_ns = period_data.period.begin_evt.timestamp log_table = None top_table = None total_stats_table = None per_tid_stats_table = None per_prio_stats_table = None total_freq_tables = None per_tid_freq_tables = None per_prio_freq_tables = None if self._args.log: log_table = self._get_log_result_table(period_data, begin_ns, end_ns) if self._args.top: top_table = self._get_top_result_table(period_data, begin_ns, end_ns) if self._args.stats: if self._args.total: total_stats_table = 
self._get_total_stats_result_table( period_data, begin_ns, end_ns) if self._args.per_tid: per_tid_stats_table = self._get_per_tid_stats_result_table( period_data, begin_ns, end_ns) if self._args.per_prio: per_prio_stats_table = self._get_per_prio_stats_result_table( period_data, begin_ns, end_ns) if self._args.freq: if self._args.total: total_freq_tables = self._get_total_freq_result_tables( period_data, begin_ns, end_ns) if self._args.per_tid: per_tid_freq_tables = self._get_per_tid_freq_result_tables( period_data, begin_ns, end_ns) if self._args.per_prio: per_prio_freq_tables = self._get_per_prio_freq_result_tables( period_data, begin_ns, end_ns) if self._mi_mode: if log_table: self._mi_append_result_table(log_table) if top_table: self._mi_append_result_table(top_table) if total_stats_table and total_stats_table.rows: self._mi_append_result_table(total_stats_table) if per_tid_stats_table and per_tid_stats_table.rows: self._mi_append_result_table(per_tid_stats_table) if per_prio_stats_table and per_prio_stats_table.rows: self._mi_append_result_table(per_prio_stats_table) if self._args.freq: if total_freq_tables: self._mi_append_result_tables(total_freq_tables) if per_tid_freq_tables: if self._args.freq_series: per_tid_freq_tables = [ self._get_per_tid_freq_series_table( per_tid_freq_tables) ] self._mi_append_result_tables(per_tid_freq_tables) if per_prio_freq_tables: if self._args.freq_series: per_prio_freq_tables = [ self._get_per_prio_freq_series_table( per_prio_freq_tables) ] self._mi_append_result_tables(per_prio_freq_tables) else: self._print_date(begin_ns, end_ns) if self._args.stats: if total_stats_table: self._print_total_stats(total_stats_table) if per_tid_stats_table: self._print_per_tid_stats(per_tid_stats_table) if per_prio_stats_table: self._print_per_prio_stats(per_prio_stats_table) if self._args.freq: if total_freq_tables: self._print_freq(total_freq_tables) if per_tid_freq_tables: self._print_freq(per_tid_freq_tables) if per_prio_freq_tables: self._print_freq(per_prio_freq_tables) if log_table: self._print_sched_events(log_table) if top_table: self._print_sched_events(top_table) def _get_total_sched_lists_stats(self, period_data): total_list = period_data.sched_list stdev = self._compute_sched_latency_stdev(total_list) total_stats = _SchedStats( count=self._analysis.count(period_data), min=period_data.min_latency, max=period_data.max_latency, stdev=stdev, total=period_data.total_latency ) return [total_list], total_stats def _get_tid_sched_lists_stats(self, period_data): tid_sched_lists = {} tid_stats = {} for sched_event in period_data.sched_list: tid = sched_event.wakee_proc.tid if tid not in tid_sched_lists: tid_sched_lists[tid] = [] tid_sched_lists[tid].append(sched_event) for tid in tid_sched_lists: sched_list = tid_sched_lists[tid] if not sched_list: continue stdev = self._compute_sched_latency_stdev(sched_list) latencies = [sched.latency for sched in sched_list] count = len(latencies) min_latency = min(latencies) max_latency = max(latencies) total_latency = sum(latencies) tid_stats[tid] = _SchedStats( count=count, min=min_latency, max=max_latency, stdev=stdev, total=total_latency, ) return tid_sched_lists, tid_stats def _get_prio_sched_lists_stats(self, period_data): prio_sched_lists = {} prio_stats = {} for sched_event in period_data.sched_list: if sched_event.prio not in prio_sched_lists: prio_sched_lists[sched_event.prio] = [] prio_sched_lists[sched_event.prio].append(sched_event) for prio in prio_sched_lists: sched_list = prio_sched_lists[prio] if not 
sched_list: continue stdev = self._compute_sched_latency_stdev(sched_list) latencies = [sched.latency for sched in sched_list] count = len(latencies) min_latency = min(latencies) max_latency = max(latencies) total_latency = sum(latencies) prio_stats[prio] = _SchedStats( count=count, min=min_latency, max=max_latency, stdev=stdev, total=total_latency, ) return prio_sched_lists, prio_stats def _get_log_result_table(self, period_data, begin_ns, end_ns): result_table = self._mi_create_result_table(self._MI_TABLE_CLASS_LOG, begin_ns, end_ns) for sched_event in period_data.sched_list: wakee_proc = mi.Process(sched_event.wakee_proc.comm, sched_event.wakee_proc.pid, sched_event.wakee_proc.tid) if sched_event.waker_proc: waker_proc = mi.Process(sched_event.waker_proc.comm, sched_event.waker_proc.pid, sched_event.waker_proc.tid) else: waker_proc = mi.Empty() result_table.append_row( wakeup_ts=mi.Timestamp(sched_event.wakeup_ts), switch_ts=mi.Timestamp(sched_event.switch_ts), latency=mi.Duration(sched_event.latency), prio=mi.Number(sched_event.prio), target_cpu=mi.Cpu(sched_event.target_cpu), wakee_proc=wakee_proc, waker_proc=waker_proc, ) return result_table def _get_top_result_table(self, period_data, begin_ns, end_ns): result_table = self._mi_create_result_table( self._MI_TABLE_CLASS_TOP, begin_ns, end_ns) top_events = sorted(period_data.sched_list, key=operator.attrgetter('latency'), reverse=True) top_events = top_events[:self._args.limit] for sched_event in top_events: wakee_proc = mi.Process(sched_event.wakee_proc.comm, sched_event.wakee_proc.pid, sched_event.wakee_proc.tid) if sched_event.waker_proc: waker_proc = mi.Process(sched_event.waker_proc.comm, sched_event.waker_proc.pid, sched_event.waker_proc.tid) else: waker_proc = mi.Empty() result_table.append_row( wakeup_ts=mi.Timestamp(sched_event.wakeup_ts), switch_ts=mi.Timestamp(sched_event.switch_ts), latency=mi.Duration(sched_event.latency), prio=mi.Number(sched_event.prio), target_cpu=mi.Cpu(sched_event.target_cpu), wakee_proc=wakee_proc, waker_proc=waker_proc, ) return result_table def _get_total_stats_result_table(self, period_data, begin_ns, end_ns): stats_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_TOTAL_STATS, begin_ns, end_ns) stdev = self._compute_sched_latency_stdev(period_data.sched_list) if math.isnan(stdev): stdev = mi.Unknown() else: stdev = mi.Duration(stdev) count = self._analysis.count(period_data) if count == 0: avg = mi.Duration(0) else: avg = mi.Duration(period_data.total_latency / count) if period_data.min_latency is None: min = mi.Duration(0) else: min = mi.Duration(period_data.min_latency) if period_data.max_latency is None: max = mi.Duration(0) else: max = mi.Duration(period_data.max_latency) stats_table.append_row( count=mi.Number(self._analysis.count(period_data)), min_latency=min, avg_latency=avg, max_latency=max, stdev_latency=stdev, ) return stats_table def _get_per_tid_stats_result_table(self, period_data, begin_ns, end_ns): stats_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_PER_TID_STATS, begin_ns, end_ns) tid_stats_list = sorted(list(period_data.tids.values()), key=lambda proc: proc.comm.lower()) for tid_stats in tid_stats_list: if not tid_stats.sched_list: continue stdev = self._compute_sched_latency_stdev(tid_stats.sched_list) if math.isnan(stdev): stdev = mi.Unknown() else: stdev = mi.Duration(stdev) prio_list = format_utils.format_prio_list(tid_stats.prio_list) stats_table.append_row( process=mi.Process(tid=tid_stats.tid, name=tid_stats.comm), count=mi.Number(tid_stats.count), 
min_latency=mi.Duration(tid_stats.min_latency), avg_latency=mi.Duration(tid_stats.total_latency / tid_stats.count), max_latency=mi.Duration(tid_stats.max_latency), stdev_latency=stdev, prio_list=mi.String(prio_list), ) return stats_table def _get_per_prio_stats_result_table(self, period_data, begin_ns, end_ns): stats_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_PER_PRIO_STATS, begin_ns, end_ns) _, prio_stats = self._get_prio_sched_lists_stats(period_data) for prio in sorted(prio_stats): stats = prio_stats[prio] stdev = stats.stdev if math.isnan(stdev): stdev = mi.Unknown() else: stdev = mi.Duration(stdev) count = stats.count min_latency = stats.min max_latency = stats.max total_latency = stats.total stats_table.append_row( prio=mi.Number(prio), count=mi.Number(count), min_latency=mi.Duration(min_latency), avg_latency=mi.Duration(total_latency / count), max_latency=mi.Duration(max_latency), stdev_latency=stdev, ) return stats_table def _get_per_tid_freq_series_table(self, freq_tables): if not freq_tables: return column_infos = [ ('duration_lower', 'Duration (lower bound)', mi.Duration), ('duration_upper', 'Duration (upper bound)', mi.Duration), ] for index, freq_table in enumerate(freq_tables): column_infos.append(( 'tid{}'.format(index), freq_table.subtitle, mi.Number, 'schedulings' )) title = 'Scheduling latencies frequency distributions' table_class = mi.TableClass(None, title, column_infos) begin = freq_tables[0].timerange.begin.value end = freq_tables[0].timerange.end.value result_table = mi.ResultTable(table_class, begin, end) for row_index, freq0_row in enumerate(freq_tables[0].rows): row_tuple = [ freq0_row.duration_lower, freq0_row.duration_upper, ] for freq_table in freq_tables: freq_row = freq_table.rows[row_index] row_tuple.append(freq_row.count) result_table.append_row_tuple(tuple(row_tuple)) return result_table def _get_per_prio_freq_series_table(self, freq_tables): if not freq_tables: return column_infos = [ ('duration_lower', 'Duration (lower bound)', mi.Duration), ('duration_upper', 'Duration (upper bound)', mi.Duration), ] for index, freq_table in enumerate(freq_tables): column_infos.append(( 'prio{}'.format(index), freq_table.subtitle, mi.Number, 'schedulings' )) title = 'Scheduling latencies frequency distributions' table_class = mi.TableClass(None, title, column_infos) begin = freq_tables[0].timerange.begin.value end = freq_tables[0].timerange.end.value result_table = mi.ResultTable(table_class, begin, end) for row_index, freq0_row in enumerate(freq_tables[0].rows): row_tuple = [ freq0_row.duration_lower, freq0_row.duration_upper, ] for freq_table in freq_tables: freq_row = freq_table.rows[row_index] row_tuple.append(freq_row.count) result_table.append_row_tuple(tuple(row_tuple)) return result_table def _fill_freq_result_table(self, sched_list, stats, min_duration, max_duration, step, freq_table): # The number of bins for the histogram resolution = self._args.freq_resolution if not self._args.freq_uniform: if self._args.min is not None: min_duration = self._args.min else: min_duration = stats.min if self._args.max is not None: max_duration = self._args.max else: max_duration = stats.max # ns to µs if min_duration is None: min_duration = 0 else: min_duration /= 1000 if max_duration is None: max_duration = 0 else: max_duration /= 1000 step = (max_duration - min_duration) / resolution if step == 0: return buckets = [] counts = [] for i in range(resolution): buckets.append(i * step) counts.append(0) for sched_event in sched_list: duration = 
sched_event.latency / 1000 index = int((duration - min_duration) / step) if index >= resolution: # special case for max value: put in last bucket (includes # its upper bound) if duration == max_duration: counts[index - 1] += 1 continue counts[index] += 1 for index, count in enumerate(counts): lower_bound = index * step + min_duration upper_bound = (index + 1) * step + min_duration freq_table.append_row( duration_lower=mi.Duration.from_us(lower_bound), duration_upper=mi.Duration.from_us(upper_bound), count=mi.Number(count), ) def _get_total_freq_result_tables(self, period_data, begin_ns, end_ns): freq_tables = [] sched_lists, sched_stats = self._get_total_sched_lists_stats( period_data) min_duration = None max_duration = None step = None if self._args.freq_uniform: latencies = [] for sched_list in sched_lists: latencies += [sched.latency for sched in sched_list] min_duration, max_duration, step = \ self._find_uniform_freq_values(latencies) for sched_list in sched_lists: freq_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_FREQ, begin_ns, end_ns) self._fill_freq_result_table(sched_list, sched_stats, min_duration, max_duration, step, freq_table) freq_tables.append(freq_table) return freq_tables def _get_per_tid_freq_result_tables(self, period_data, begin_ns, end_ns): freq_tables = [] tid_sched_lists, tid_stats = self._get_tid_sched_lists_stats( period_data) min_duration = None max_duration = None step = None if self._args.freq_uniform: latencies = [] for sched_list in tid_sched_lists.values(): latencies += [sched.latency for sched in sched_list] min_duration, max_duration, step = \ self._find_uniform_freq_values(latencies) for tid in sorted(tid_sched_lists): sched_list = tid_sched_lists[tid] stats = tid_stats[tid] subtitle = 'TID: {}'.format(tid) freq_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_FREQ, begin_ns, end_ns, subtitle) self._fill_freq_result_table(sched_list, stats, min_duration, max_duration, step, freq_table) freq_tables.append(freq_table) return freq_tables def _get_per_prio_freq_result_tables(self, period_data, begin_ns, end_ns): freq_tables = [] prio_sched_lists, prio_stats = self._get_prio_sched_lists_stats( period_data) min_duration = None max_duration = None step = None if self._args.freq_uniform: latencies = [] for sched_list in prio_sched_lists.values(): latencies += [sched.latency for sched in sched_list] min_duration, max_duration, step = \ self._find_uniform_freq_values(latencies) for prio in sorted(prio_sched_lists): sched_list = prio_sched_lists[prio] stats = prio_stats[prio] subtitle = 'Priority: {}'.format(prio) freq_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_FREQ, begin_ns, end_ns, subtitle) self._fill_freq_result_table(sched_list, stats, min_duration, max_duration, step, freq_table) freq_tables.append(freq_table) return freq_tables def _compute_sched_latency_stdev(self, sched_events): sched_latencies = [] for sched_event in sched_events: sched_latencies.append(sched_event.latency) if len(sched_latencies) < 2: return float('nan') return statistics.stdev(sched_latencies) def _print_sched_events(self, result_table): fmt = '[{:<18}, {:<18}] {:>15} {:>10} {:>3} {:<25} {:<25}' title_fmt = '{:<20} {:<19} {:>15} {:>10} {:>3} {:<25} {:<25}' print() print(result_table.title) print(title_fmt.format('Wakeup', 'Switch', 'Latency (us)', 'Priority', 'CPU', 'Wakee', 'Waker')) for row in result_table.rows: wakeup_ts = row.wakeup_ts.value switch_ts = row.switch_ts.value latency = row.latency.value prio = row.prio.value target_cpu = 
row.target_cpu.id wakee_proc = row.wakee_proc waker_proc = row.waker_proc wakee_str = '%s (%d)' % (wakee_proc.name, wakee_proc.tid) if isinstance(waker_proc, mi.Empty): waker_str = 'Unknown (N/A)' else: waker_str = '%s (%d)' % (waker_proc.name, waker_proc.tid) print(fmt.format(self._format_timestamp(wakeup_ts), self._format_timestamp(switch_ts), '%0.03f' % (latency / 1000), prio, target_cpu, wakee_str, waker_str)) def _print_total_stats(self, stats_table): row_format = '{:<12} {:<12} {:<12} {:<12} {:<12}' header = row_format.format( 'Count', 'Min', 'Avg', 'Max', 'Stdev' ) if stats_table.rows: print() print(stats_table.title + ' (us)') print(header) for row in stats_table.rows: if type(row.stdev_latency) is mi.Unknown: stdev_str = '?' else: stdev_str = '%0.03f' % row.stdev_latency.to_us() row_str = row_format.format( '%d' % row.count.value, '%0.03f' % row.min_latency.to_us(), '%0.03f' % row.avg_latency.to_us(), '%0.03f' % row.max_latency.to_us(), '%s' % stdev_str, ) print(row_str) def _print_per_tid_stats(self, stats_table): row_format = '{:<25} {:>8} {:>12} {:>12} {:>12} {:>12} {}' header = row_format.format( 'Process', 'Count', 'Min', 'Avg', 'Max', 'Stdev', 'Priorities' ) if stats_table.rows: print() print(stats_table.title + ' (us)') print(header) for row in stats_table.rows: if type(row.stdev_latency) is mi.Unknown: stdev_str = '?' else: stdev_str = '%0.03f' % row.stdev_latency.to_us() proc = row.process proc_str = '%s (%d)' % (proc.name, proc.tid) row_str = row_format.format( '%s' % proc_str, '%d' % row.count.value, '%0.03f' % row.min_latency.to_us(), '%0.03f' % row.avg_latency.to_us(), '%0.03f' % row.max_latency.to_us(), '%s' % stdev_str, '%s' % row.prio_list.value, ) print(row_str) def _print_per_prio_stats(self, stats_table): row_format = '{:>4} {:>8} {:>12} {:>12} {:>12} {:>12}' header = row_format.format( 'Prio', 'Count', 'Min', 'Avg', 'Max', 'Stdev' ) if stats_table.rows: print() print(stats_table.title + ' (us)') print(header) for row in stats_table.rows: if type(row.stdev_latency) is mi.Unknown: stdev_str = '?' 
else: stdev_str = '%0.03f' % row.stdev_latency.to_us() row_str = row_format.format( '%d' % row.prio.value, '%d' % row.count.value, '%0.03f' % row.min_latency.to_us(), '%0.03f' % row.avg_latency.to_us(), '%0.03f' % row.max_latency.to_us(), '%s' % stdev_str, ) print(row_str) def _print_frequency_distribution(self, freq_table): title_fmt = 'Scheduling latency frequency distribution - {}' graph = termgraph.FreqGraph( data=freq_table.rows, get_value=lambda row: row.count.value, get_lower_bound=lambda row: row.duration_lower.to_us(), title=title_fmt.format(freq_table.subtitle), unit='µs' ) graph.print_graph() def _print_freq(self, freq_tables): for freq_table in freq_tables: self._print_frequency_distribution(freq_table) def _validate_transform_args(self): args = self._args # If neither --total nor --per-prio are specified, default # to --per-tid if not (args.total or args.per_prio): args.per_tid = True def _add_arguments(self, ap): Command._add_min_max_args(ap) Command._add_proc_filter_args(ap) Command._add_freq_args( ap, help='Output the frequency distribution of sched switch ' 'latencies') Command._add_top_args(ap, help='Output the top sched switch latencies') Command._add_log_args( ap, help='Output the sched switches in chronological order') Command._add_stats_args(ap, help='Output sched switch statistics') ap.add_argument('--total', action='store_true', help='Group all results (applies to stats and freq)') ap.add_argument('--per-tid', action='store_true', help='Group results per-TID (applies to stats and ' 'freq) (default)') ap.add_argument('--per-prio', action='store_true', help='Group results per-prio (applies to stats and ' 'freq)') def _run(mi_mode): schedcmd = SchedAnalysisCommand(mi_mode=mi_mode) schedcmd.run() def _runstats(mi_mode): sys.argv.insert(1, '--stats') _run(mi_mode) def _runlog(mi_mode): sys.argv.insert(1, '--log') _run(mi_mode) def _runtop(mi_mode): sys.argv.insert(1, '--top') _run(mi_mode) def _runfreq(mi_mode): sys.argv.insert(1, '--freq') _run(mi_mode) def runstats(): _runstats(mi_mode=False) def runlog(): _runlog(mi_mode=False) def runtop(): _runtop(mi_mode=False) def runfreq(): _runfreq(mi_mode=False) def runstats_mi(): _runstats(mi_mode=True) def runlog_mi(): _runlog(mi_mode=True) def runtop_mi(): _runtop(mi_mode=True) def runfreq_mi(): _runfreq(mi_mode=True) lttnganalyses-0.6.1/lttnganalyses/cli/mi.py0000664000175000017500000003106012745737273022515 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Philippe Proulx # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from collections import namedtuple import sys _MI_VERSION = (1, 0) class Tags: CPU = 'cpu' MEMORY = 'memory' INTERRUPT = 'interrupt' SCHED = 'sched' SYSCALL = 'syscall' IO = 'io' TOP = 'top' STATS = 'stats' FREQ = 'freq' LOG = 'log' PERIOD = 'period' class ColumnDescription: def __init__(self, key, title, do_class, unit=None): self._key = key self._title = title self._do_class = do_class self._unit = unit @property def key(self): return self._key def to_native_object(self): obj = { 'title': self._title, 'class': self._do_class, } if self._unit: obj['unit'] = self._unit return obj class TableClass: def __init__(self, name, title, column_descriptions_tuples=None, inherit=None): if column_descriptions_tuples is None: column_descriptions_tuples = [] self._inherit = inherit self._name = name self._title = title self._column_descriptions = [] for column_descr_tuple in column_descriptions_tuples: key = column_descr_tuple[0] title = column_descr_tuple[1] do_type = column_descr_tuple[2] unit = None if len(column_descr_tuple) > 3: unit = column_descr_tuple[3] column_descr = ColumnDescription(key, title, do_type.CLASS, unit) self._column_descriptions.append(column_descr) @property def name(self): return self._name @property def title(self): return self._title def to_native_object(self): obj = {} column_descrs = self._column_descriptions native_column_descrs = [c.to_native_object() for c in column_descrs] if self._inherit is not None: obj['inherit'] = self._inherit if self._title is not None: obj['title'] = self._title if native_column_descrs: obj['column-descriptions'] = native_column_descrs return obj def get_column_named_tuple(self): keys = [cd.key for cd in self._column_descriptions] return namedtuple('Column', keys) class ResultTable: def __init__(self, table_class, begin, end, subtitle=None): self._table_class = table_class self._column_named_tuple = table_class.get_column_named_tuple() self._subtitle = subtitle self._timerange = TimeRange(begin, end) self._rows = [] @property def table_class(self): return self._table_class @property def timerange(self): return self._timerange @property def title(self): return self._table_class.title @property def subtitle(self): return self._subtitle def append_row(self, **kwargs): row = self._column_named_tuple(**kwargs) self._rows.append(row) def append_row_tuple(self, row_tuple): self._rows.append(row_tuple) @property def rows(self): return self._rows def to_native_object(self): obj = { 'class': self._table_class.name, 'time-range': self._timerange.to_native_object(), } row_objs = [] if self._table_class.name: if self._subtitle is not None: full_title = '{} [{}]'.format(self.title, self._subtitle) table_class = TableClass(None, full_title, inherit=self._table_class.name) self._table_class = table_class if self._table_class.name is None: obj['class'] = self._table_class.to_native_object() for row in self._rows: row_obj = [] for cell in row: row_obj.append(cell.to_native_object()) row_objs.append(row_obj) obj['data'] = row_objs return obj class _DataObject: def to_native_object(self): base = {'class': self.CLASS} base.update(self._to_native_object()) return base def _to_native_object(self): raise NotImplementedError def __eq__(self, other): # ensure we're comparing the same type first if not 
isinstance(other, self.__class__): return False # call specific equality method return self._eq(other) def _eq(self, other): raise NotImplementedError class Empty(_DataObject): def to_native_object(self): return None def _eq(self, other): return True class Unknown(_DataObject): CLASS = 'unknown' def _to_native_object(self): return {} def _eq(self, other): return True def __str__(self): return '?' class _SimpleValue(_DataObject): def __init__(self, value): self._value = value @property def value(self): return self._value def _to_native_object(self): return {'value': self._value} def __str__(self): return str(self._value) def _eq(self, other): return self.value == other.value class Boolean(_SimpleValue): CLASS = 'bool' NEG_INF = '-inf' POS_INF = '+inf' class Number(_SimpleValue): CLASS = 'number' def __init__(self, value, low=None, high=None): super().__init__(value) self._low = low self._high = high @property def low(self): return self._low @property def high(self): return self._high def _to_native_object(self): obj = {} if self.value is not None: obj['value'] = self.value if self._low is not None: obj['low'] = self._low if self._high is not None: obj['high'] = self._high return obj def _eq(self, other): self_tuple = (self.value, self.low, self.high) other_tuple = (other.value, other.low, other.high) return self_tuple == other_tuple class String(_SimpleValue): CLASS = 'string' class _SimpleName(_DataObject): def __init__(self, name): self._name = name @property def name(self): return self._name def _to_native_object(self): return {'name': self._name} def __str__(self): return self._name def _eq(self, other): return self.name == other.name class Ratio(_SimpleValue): CLASS = 'ratio' @classmethod def from_percentage(cls, value): return cls(value / 100) def to_percentage(self): return self._value * 100 class Timestamp(Number): CLASS = 'timestamp' class Duration(Number): CLASS = 'duration' @classmethod def from_ms(cls, ms): return cls(ms * 1000000) @classmethod def from_us(cls, us): return cls(us * 1000) def to_ms(self): return self._value / 1000000 def to_us(self): return self._value / 1000 class Size(Number): CLASS = 'size' class Bitrate(Number): CLASS = 'bitrate' @classmethod def from_size_duration(cls, size, duration): return cls(size * 8 / duration) class TimeRange(_DataObject): CLASS = 'time-range' def __init__(self, begin, end): self._begin = self._to_timestamp(begin) self._end = self._to_timestamp(end) @staticmethod def _to_timestamp(val): if type(val) is int or type(val) is float: return Timestamp(val) return val @property def begin(self): return self._begin @property def end(self): return self._end def _to_native_object(self): return { 'begin': self._begin.to_native_object(), 'end': self._end.to_native_object() } def _eq(self, other): return (self.begin, self.end) == (other.begin, other.end) class Syscall(_SimpleName): CLASS = 'syscall' class Process(_DataObject): CLASS = 'process' def __init__(self, name=None, pid=None, tid=None): self._name = name self._pid = pid self._tid = tid @property def name(self): return self._name @property def pid(self): return self._pid @property def tid(self): return self._tid def _to_native_object(self): ret_dict = {} if self._name is not None: ret_dict['name'] = self._name if self._pid is not None: ret_dict['pid'] = self._pid if self._tid is not None: ret_dict['tid'] = self._tid return ret_dict def _eq(self, other): self_tuple = (self.name, self.pid, self.tid) other_tuple = (other.name, other.pid, other.tid) return self_tuple == other_tuple class 
Path(_DataObject): CLASS = 'path' def __init__(self, path): self._path = path @property def path(self): return self._path def _to_native_object(self): return {'path': self._path} def _eq(self, other): return self.path == other.path class Fd(_DataObject): CLASS = 'fd' def __init__(self, fd): self._fd = fd @property def fd(self): return self._fd def _to_native_object(self): return {'fd': self._fd} def _eq(self, other): return self.fd == other.fd class Irq(_DataObject): CLASS = 'irq' def __init__(self, is_hard, nr, name=None): self._is_hard = is_hard self._nr = nr self._name = name @property def is_hard(self): return self._is_hard @property def nr(self): return self._nr @property def name(self): return self._name def _to_native_object(self): obj = {'hard': self._is_hard, 'nr': self._nr} if self._name is not None: obj['name'] = self._name return obj def _eq(self, other): self_tuple = (self.is_hard, self.nr, self.name) other_tuple = (other.is_hard, other.nr, other.name) return self_tuple == other_tuple class Cpu(_DataObject): CLASS = 'cpu' def __init__(self, cpu_id): self._id = cpu_id @property def id(self): return self._id def _to_native_object(self): return {'id': self._id} def _eq(self, other): return self.id == other.id class Disk(_SimpleName): CLASS = 'disk' class Partition(_SimpleName): CLASS = 'part' class NetIf(_SimpleName): CLASS = 'netif' def get_metadata(version, title, description, authors, url, tags, table_classes): t_classes = {t.name: t.to_native_object() for t in table_classes} return { 'mi-version': { 'major': _MI_VERSION[0], 'minor': _MI_VERSION[1], }, 'version': { 'major': version.major, 'minor': version.minor, 'patch': version.patch, 'extra': version.extra }, 'title': title, 'description': description, 'authors': authors, 'url': url, 'tags': tags, 'table-classes': t_classes, } def get_error(message, code=None): error = { 'error-message': message, } if code is not None: error['error-code'] = code return error def get_progress(at=None, msg=None): if at is None: at = '*' add = '' if msg is not None: add = ' {}'.format(msg) return '{}{}'.format(at, add) def get_version_string(): return '{}.{}'.format(_MI_VERSION[0], _MI_VERSION[1]) def print_progress(at=None, msg=None): print(get_progress(at, msg)) sys.stdout.flush() lttnganalyses-0.6.1/lttnganalyses/cli/periods.py0000664000175000017500000030772713033475105023555 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Julien Desfossez # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
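# ---------------------------------------------------------------------------
# The mi module above exposes small typed data objects (mi.Duration,
# mi.Number, ...) plus TableClass/ResultTable, all of which serialize
# through to_native_object().  A minimal sketch of how they compose,
# for illustration only -- the 'demo' table class, its column names and
# the _demo_mi_round_trip() helper are hypothetical, not part of the
# analyses:


def _demo_mi_round_trip():
    # Local import so the sketch stays self-contained wherever it sits.
    from . import mi

    # Column tuples are (key, title, data-object class[, unit]), the
    # exact format consumed by mi.TableClass.__init__() above.
    demo_class = mi.TableClass('demo', 'Demo table', [
        ('duration', 'Duration', mi.Duration),
        ('count', 'Count', mi.Number, 'occurrences'),
    ])

    # begin/end may be plain ints: mi.TimeRange wraps them in
    # mi.Timestamp automatically.
    table = mi.ResultTable(demo_class, 0, 1000000)
    table.append_row(duration=mi.Duration.from_us(12.5),
                     count=mi.Number(3))

    # Plain dicts/lists, ready to be dumped as JSON by the MI layer.
    return table.to_native_object()

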
import sys import math import operator import statistics import collections import ast import re from collections import OrderedDict from . import mi, termgraph from ..core import periods from .command import Command class _StatsFreqTables(): def __init__(self): # Stats tables self.per_parent_stats_table = None self.per_parent_count_table = None self.per_parent_pc_table = None self.global_duration_table = None self.global_count_table = None self.global_pc_table = None # Raw values for the frequency distributions # *_values[period][child] = [] self.duration_values = {} self.count_values = {} self.pc_values = {} self.global_duration_values = {} self.global_count_values = {} self.global_pc_values = {} # Freq tables self.per_parent_freq_tables = [] self.per_parent_count_freq_tables = [] self.per_parent_pc_freq_tables = [] self.global_duration_freq_tables = [] self.global_count_freq_tables = [] self.global_pc_freq_tables = [] class _PeriodStats(): def __init__(self, count=0, min=None, max=0, stdev=0, total=0): self.count = count self.min = min self.max = max self.stdev = stdev self.total = total self.count_array = [] self.durations = [] self.min_count = None self.max_count = 0 self.total_count = 0 # Percentage of the parent period time spent self.min_pc = None self.max_pc = 0 self.total_pc = 0 self.pc_array = [] # How many parent periods have us as a child, indexed by # parent period name. self.parent_count = {} def add_count(self, count): if self.min_count is None or count < self.min_count: self.min_count = count if self.max_count < count: self.max_count = count self.total_count += count self.count_array.append(count) def add_duration(self, duration): if self.min is None or duration < self.min: self.min = duration if self.max < duration: self.max = duration self.total += duration self.durations.append(duration) def add_percentage(self, pc): if self.min_pc is None or pc < self.min_pc: self.min_pc = pc if self.max_pc < pc: self.max_pc = pc self.total_pc += pc self.pc_array.append(pc) class _TmpAggregation(): def __init__(self, parent=None): # self._children[name] = [durations] self._children = {} self._parent = parent self.capture_groups = None @property def children(self): return self._children def add_child(self, name, duration): if name not in self._children.keys(): self._children[name] = [] self._children[name].append(duration) parent = self._parent while parent is not None: parent.add_child(name, duration) parent = parent._parent class _AggregatedPeriodStats(): def __init__(self, registry, name): self._reg = registry self._name = name self._children = OrderedDict() self._stats = _PeriodStats() self.nr_periods = 0 self._init_children() def _recurs_find_children(self, period): for child in period.children: self._children[child.name] = _PeriodStats() self._recurs_find_children(child) def _init_children(self): period_def = self._reg.get_period_def(self._name) if period_def is None: return self._recurs_find_children(period_def) def finish_period(self, start_ts, end_ts, child_dict): parent_duration = end_ts - start_ts for child in child_dict.keys(): count = len(child_dict[child]) duration = 0 for period in child_dict[child]: duration += period c = self._children[child] pc = (duration / parent_duration) * 100 c.add_count(count) c.add_duration(duration) c.add_percentage(pc) if self._name not in c.parent_count.keys(): c.parent_count[self._name] = 0 c.parent_count[self._name] += 1 self.nr_periods += 1 class _AggregatedItem(): def __init__(self, event, parent_event, group_by_captures, full_captures): 
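        # _AggregatedItem pins one child period event to the parent it
        # was aggregated under, along with the captures collected while
        # walking the hierarchy.  Hypothetical construction, values for
        # illustration only:
        #
        #   item = _AggregatedItem(child_event, parent_event,
        #                          group_by_captures=[('comm', 'bash')],
        #                          full_captures=[('comm', 'bash'),
        #                                         ('tid', 42)])
        #   item.event.duration      # child duration (ns)
        #   item.parent_event.name   # aggregating parent period's name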
self._event = event self._parent = parent_event self._group_by_captures = group_by_captures self._full_captures = full_captures @property def event(self): return self._event @property def parent_event(self): return self._parent @property def group_by_captures(self): return self._group_by_captures @property def full_captures(self): return self._full_captures class PeriodAnalysisCommand(Command): _DESC = """The periods command.""" _ANALYSIS_CLASS = periods.PeriodAnalysis _MI_TITLE = 'Periods analysis' _MI_DESCRIPTION = \ 'Periods frequency distribution, statistics, top, and log' _MI_TAGS = [mi.Tags.PERIOD, mi.Tags.STATS, mi.Tags.FREQ, mi.Tags.TOP, mi.Tags.LOG] _MI_TABLE_CLASS_LOG = 'log' _MI_TABLE_CLASS_TOP = 'top' _MI_TABLE_CLASS_PER_PERIOD_STATS = 'per_period_stats' _MI_TABLE_CLASS_PER_PARENT_STATS = 'per_parent_stats' _MI_TABLE_CLASS_PER_PARENT_COUNT = 'per_parent_count' _MI_TABLE_CLASS_PER_PARENT_PC = 'per_parent_percentage' _MI_TABLE_CLASS_FREQ_DURATION = 'freq_duration' _MI_TABLE_CLASS_FREQ_COUNT = 'freq_count' _MI_TABLE_CLASS_FREQ_PC = 'freq_ratio' _MI_TABLE_CLASS_HIERARCHICAL_LOG = 'aggregated_log' _MI_TABLE_CLASS_AGGREGATED_TOP = 'aggregated_top' _MI_TABLE_CLASS_AGGREGATED_LOG = 'aggregated_stats' _MI_TABLE_CLASSES = [ ( _MI_TABLE_CLASS_LOG, 'Period log', [ ('begin_ts', 'Period begin timestamp', mi.Timestamp), ('end_ts', 'Period end timestamp', mi.Timestamp), ('duration', 'Period duration', mi.Duration), ('name', 'Period name', mi.String), ('begin_captures', 'Begin captures', mi.String), ('end_captures', 'End captures', mi.String), ] ), ( _MI_TABLE_CLASS_TOP, 'Period top', [ ('begin_ts', 'Period begin timestamp', mi.Timestamp), ('end_ts', 'Period end timestamp', mi.Timestamp), ('duration', 'Period duration', mi.Duration), ('name', 'Period name', mi.String), ('begin_captures', 'Begin captures', mi.String), ('end_captures', 'End captures', mi.String), ] ), ( _MI_TABLE_CLASS_PER_PERIOD_STATS, 'Period statistics', [ ('name', 'Period name', mi.String), ('count', 'Period count', mi.Number, 'occurences'), ('min_duration', 'Minimum duration', mi.Duration), ('avg_duration', 'Average duration', mi.Duration), ('max_duration', 'Maximum duration', mi.Duration), ('stdev_duration', 'Period duration standard deviation', mi.Duration), ('runtime', 'Total runtime', mi.Duration), ] ), ( _MI_TABLE_CLASS_FREQ_DURATION, 'Period duration frequency distribution', [ ('lower', 'Duration (lower bound)', mi.Duration), ('upper', 'Duration (upper bound)', mi.Duration), ('count', 'Period duration', mi.Number, 'us'), ] ), ( _MI_TABLE_CLASS_FREQ_COUNT, 'Period count frequency distribution', [ ('lower', 'Count (lower bound)', mi.Number), ('upper', 'Count (upper bound)', mi.Number), ('count', 'Period count', mi.Number, 'count'), ] ), ( _MI_TABLE_CLASS_FREQ_PC, 'Period usage ratio frequency distribution', [ ('lower', 'Ratio (lower bound)', mi.Number), ('upper', 'Ratio (upper bound)', mi.Number), ('count', 'Period usage ratio', mi.Number, '%'), ] ), ( _MI_TABLE_CLASS_HIERARCHICAL_LOG, 'Hierarchical period log', [ ('parent_begin_ts', 'Parent begin timestamp', mi.Timestamp), ('parent_end_ts', 'Parent end timestamp', mi.Timestamp), ('parent_name', 'Parent period name', mi.String), ('child_begin_ts', 'Child begin timestamp', mi.Timestamp), ('child_end_ts', 'Child end timestamp', mi.Timestamp), ('child_name', 'Child period name', mi.String), ('child_duration', 'Child period duration', mi.Duration), ('parent_duration', 'Parent period duration', mi.Duration), ('captures', 'Captures', mi.String), ] ), ( 
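        # Every entry in this list follows the same shape: (class name,
        # title, list of (key, title, mi class[, unit]) column tuples),
        # which mi.TableClass turns into ColumnDescription objects for
        # the MI metadata.  A minimal hypothetical entry would read:
        #
        #   (_MI_TABLE_CLASS_LOG, 'Period log',
        #    [('duration', 'Period duration', mi.Duration)]),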
_MI_TABLE_CLASS_AGGREGATED_TOP, 'Aggregated period top', [ ('parent_begin_ts', 'Parent begin timestamp', mi.Timestamp), ('parent_end_ts', 'Parent end timestamp', mi.Timestamp), ('parent_name', 'Parent period name', mi.String), ('child_begin_ts', 'Child begin timestamp', mi.Timestamp), ('child_end_ts', 'Child end timestamp', mi.Timestamp), ('child_name', 'Child period name', mi.String), ('child_duration', 'Child period duration', mi.Duration), ('parent_duration', 'Parent period duration', mi.Duration), ('captures', 'Captures', mi.String), ] ), ( _MI_TABLE_CLASS_AGGREGATED_LOG, 'Aggregated log', [ ('parent_name', 'Parent period name', mi.String), ('parent_begin_ts', 'Parent begin timestamp', mi.Timestamp), ('parent_end_ts', 'Parent end timestamp', mi.Timestamp), ('child_name', 'Child period name', mi.String), ('count', 'Period count', mi.Number, 'occurences'), ('min_duration', 'Minimum duration', mi.Duration), ('avg_duration', 'Average duration', mi.Duration), ('max_duration', 'Maximum duration', mi.Duration), ('stdev_duration', 'Period duration standard deviation', mi.Duration), ('runtime', 'Total runtime', mi.Duration), ('parent_captures', 'Parent captures', mi.String), ] ), ( _MI_TABLE_CLASS_PER_PARENT_STATS, 'Per-parent period duration statistics', [ ('name', 'Period name', mi.String), ('parent', 'Parent', mi.String), ('min_duration', 'Minimum duration', mi.Duration), ('avg_duration', 'Average duration', mi.Duration), ('max_duration', 'Maximum duration', mi.Duration), ('stdev_duration', 'Period duration standard deviation', mi.Duration), ] ), ( _MI_TABLE_CLASS_PER_PARENT_COUNT, 'Per-parent period count statistics', [ ('name', 'Period name', mi.String), ('parent', 'Parent', mi.String), ('min', 'Minimum', mi.Number, 'occurences'), ('avg', 'Average', mi.Number, 'occurences'), ('max', 'Maximum', mi.Number, 'occurences'), ('stdev', 'Standard deviation', mi.Duration), ] ), ( _MI_TABLE_CLASS_PER_PARENT_PC, 'Per-parent duration ratio', [ ('name', 'Period name', mi.String), ('parent', 'Parent', mi.String), ('min', 'Minimum', mi.Number, 'occurences'), ('avg', 'Average', mi.Number, 'occurences'), ('max', 'Maximum', mi.Number, 'occurences'), ('stdev', 'Standard deviation', mi.Duration), ] ), ] def _filter_duration(self, duration): if self._args.min_duration is not None and \ duration < (self._args.min_duration * 1000): return False if self._args.max_duration is not None and \ duration > (self._args.max_duration * 1000): return False return True def _filter_event_duration(self, period_event): return self._filter_duration(period_event.duration) def _get_period_tree(self, period, period_tree): period_tree[period.name] = OrderedDict() for child in period.children: self._get_period_tree(child, period_tree[period.name]) def _analysis_tick(self, period_data, end_ns): # We only output something at the end of the analysis # not when each period finishes if period_data is not None: return # Override the timestamps since we are only interested in the # whole analysis timestamps, not the ones from the last period. 
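        # From here on, _analysis_tick() works in two phases: build every
        # requested result table (log, top, stats, freq, plus one set per
        # --group-by group), then emit them once -- MI mode appends the
        # tables, text mode prints them.  Shape of the per-group
        # containers used below, with a hypothetical group key:
        #
        #   per_period_stats_group_by_tables['comm = bash']  -> stats table
        #   per_period_freq_group_by_tables['comm = bash']   -> freq tables
        #   freq_tables_group_per_period_names['comm = bash']
        #       -> {period name: freq table}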
begin_ns = self._analysis.first_event_ts end_ns = self._analysis.last_event_ts log_table = None top_table = None per_period_stats_table = None per_period_freq_tables = None aggregated_groups = None hierarchical_list = None aggregated_log_tables = None per_parent_aggregated_dict = None per_parent_stats_freq_group_by_tables = OrderedDict() per_period_stats_group_by_tables = OrderedDict() per_period_freq_group_by_tables = OrderedDict() # for freq-series: # freq_tables_group_per_period_names[group][period] = freq_table freq_tables_group_per_period_names = OrderedDict() # First pass to find the uniform values if needed freq_min = freq_max = freq_step = None period_tree = OrderedDict() reg = self._analysis_conf.period_def_registry for parent in reg.root_period_defs: self._get_period_tree(parent, period_tree) if self._args.select or self._args.order_by == "hierarchy" or \ self._args.stats or self._args.freq: per_parent_aggregated_dict, hierarchical_list, per_period_stats, \ per_parent_period_group_by_stats, \ per_period_group_by_stats = self._get_aggregated_lists() if self._args.group_by: aggregated_groups = self._get_aggregated_groups( per_parent_aggregated_dict) if self._args.log: # hierarchical view if self._args.order_by == "hierarchy": log_table = self._get_log_result_table( begin_ns, end_ns, hierarchical_list) # aggregated view elif self._args.select: aggregated_log_tables = \ self._get_aggregated_log_table( begin_ns, end_ns, per_parent_aggregated_dict, aggregated_groups, top=True) else: # time-based view log_table = self._get_log_result_table( begin_ns, end_ns, self._analysis.all_period_list) if self._args.top: top_table = self._get_top_result_table( begin_ns, end_ns, self._analysis.all_period_list) # Common tables for stats and freq if self._args.stats or self._args.freq: per_period_stats_table = \ self._get_per_period_stats_result_table(begin_ns, end_ns, period_tree) per_parent_stats_freq_tables = \ self._get_per_parent_stats_result_table( begin_ns, end_ns, per_period_stats, '', per_period_stats) if self._args.freq_uniform: for group in per_parent_period_group_by_stats.keys(): freq_min, freq_max, freq_step = \ self._find_filtered_uniform_freq_values( per_period_group_by_stats[group]) for group in per_parent_period_group_by_stats.keys(): per_period_stats_group_by_tables[group], \ per_period_freq_group_by_tables[group], \ freq_tables_group_per_period_names[group] = \ self._get_grouped_by_period_stats_freq( begin_ns, end_ns, per_period_group_by_stats[group], "'%s' - " % group, freq_min, freq_max, freq_step) # One _StatsFreqTables per group per_parent_stats_freq_group_by_tables[group] = \ self._get_per_parent_stats_result_table( begin_ns, end_ns, per_parent_period_group_by_stats[group], "'%s' - " % group, per_period_stats) if self._args.freq: per_period_freq_tables = \ self._get_per_period_freq_result_tables(begin_ns, end_ns) # This updates per_parent_stats_freq_tables with the new tables, # nothing to return. 
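            # _get_per_parent_freq_result_table() mutates the
            # _StatsFreqTables instance it receives: it bins the raw
            # duration/count/percentage samples stashed on the object by
            # _get_per_parent_stats_result_table() and appends the
            # resulting tables to its *_freq_tables lists.  Sketch of
            # the pattern, with hypothetical keys:
            #
            #   tables.duration_values['frame']['net'] = [12000, 8000]
            #   self._get_per_parent_freq_result_table(b, e, tables)
            #   tables.per_parent_freq_tables  # now populated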
self._get_per_parent_freq_result_table( begin_ns, end_ns, per_parent_stats_freq_tables) for group in per_parent_period_group_by_stats.keys(): self._get_per_parent_freq_result_table( begin_ns, end_ns, per_parent_stats_freq_group_by_tables[group], "'%s' -- " % group) if self._mi_mode: if log_table: self._mi_append_result_table(log_table) if top_table: self._mi_append_result_table(top_table) if self._args.stats: self._mi_append_result_table(per_period_stats_table) self._mi_append_result_table( per_parent_stats_freq_tables.per_parent_stats_table) self._mi_append_result_table( per_parent_stats_freq_tables.per_parent_count_table) self._mi_append_result_table( per_parent_stats_freq_tables.per_parent_pc_table) self._mi_append_result_table( per_parent_stats_freq_tables.global_duration_table) self._mi_append_result_table( per_parent_stats_freq_tables.global_count_table) self._mi_append_result_table( per_parent_stats_freq_tables.global_pc_table) for group in per_parent_period_group_by_stats.keys(): self._mi_append_result_table( per_parent_stats_freq_group_by_tables[group]. per_parent_stats_table) self._mi_append_result_table( per_parent_stats_freq_group_by_tables[group]. per_parent_count_table) self._mi_append_result_table( per_parent_stats_freq_group_by_tables[group]. per_parent_pc_table) self._mi_append_result_table( per_parent_stats_freq_group_by_tables[group]. global_duration_table) self._mi_append_result_table( per_parent_stats_freq_group_by_tables[group]. global_count_table) self._mi_append_result_table( per_parent_stats_freq_group_by_tables[group]. global_pc_table) if self._args.freq: self._mi_append_result_tables(per_period_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_tables.per_parent_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_tables.per_parent_count_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_tables.per_parent_pc_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_tables.global_duration_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_tables.global_count_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_tables.global_pc_freq_tables) for group in per_parent_period_group_by_stats.keys(): self._mi_append_result_tables( per_period_freq_group_by_tables[group]) self._mi_append_result_tables( per_parent_stats_freq_group_by_tables[group]. per_parent_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_group_by_tables[group]. per_parent_count_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_group_by_tables[group]. per_parent_pc_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_group_by_tables[group]. global_duration_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_group_by_tables[group]. global_count_freq_tables) self._mi_append_result_tables( per_parent_stats_freq_group_by_tables[group]. 
global_pc_freq_tables) if self._args.freq_series: per_period_tables_group_series = \ self._get_per_group_freq_series_tables( begin_ns, end_ns, per_period_freq_group_by_tables, freq_tables_group_per_period_names) for period in per_period_tables_group_series.keys(): self._mi_append_result_tables( [per_period_tables_group_series[period]]) else: self._print_date(begin_ns, end_ns) if self._args.stats: self._print_per_period_stats(per_period_stats_table, period_tree) self._print_per_parent_stats( per_parent_stats_freq_tables.per_parent_stats_table) self._print_per_parent_pc( per_parent_stats_freq_tables.per_parent_pc_table) self._print_per_parent_count( per_parent_stats_freq_tables.per_parent_count_table) self._print_per_parent_stats( per_parent_stats_freq_tables.global_duration_table) self._print_per_parent_pc( per_parent_stats_freq_tables.global_pc_table) self._print_per_parent_count( per_parent_stats_freq_tables.global_count_table) for group in per_parent_period_group_by_stats.keys(): print("\n\n### Group: %s ###" % group) self._print_per_period_stats( per_period_stats_group_by_tables[group], period_tree) self._print_per_parent_stats( per_parent_stats_freq_group_by_tables[group]. per_parent_stats_table) self._print_per_parent_pc( per_parent_stats_freq_group_by_tables[group]. per_parent_pc_table) self._print_per_parent_count( per_parent_stats_freq_group_by_tables[group]. per_parent_count_table) self._print_per_parent_stats( per_parent_stats_freq_group_by_tables[group]. global_duration_table) self._print_per_parent_pc( per_parent_stats_freq_group_by_tables[group]. global_pc_table) self._print_per_parent_count( per_parent_stats_freq_group_by_tables[group]. global_count_table) if self._args.freq: self._print_freq(per_period_freq_tables, 'us') self._print_freq( per_parent_stats_freq_tables.per_parent_freq_tables, 'us') self._print_freq( per_parent_stats_freq_tables.per_parent_pc_freq_tables, '%') self._print_freq( per_parent_stats_freq_tables.per_parent_count_freq_tables, 'instances') self._print_freq( per_parent_stats_freq_tables.global_duration_freq_tables, 'us') self._print_freq( per_parent_stats_freq_tables.global_pc_freq_tables, '%') self._print_freq( per_parent_stats_freq_tables.global_count_freq_tables, 'instances') for group in per_parent_period_group_by_stats.keys(): print("\n\n### Group: %s ###" % group) self._print_freq(per_period_freq_group_by_tables[group], 'us') self._print_freq( per_parent_stats_freq_group_by_tables[group]. per_parent_freq_tables, 'us') self._print_freq( per_parent_stats_freq_group_by_tables[group]. per_parent_pc_freq_tables, '%') self._print_freq( per_parent_stats_freq_group_by_tables[group]. per_parent_count_freq_tables, 'instances') self._print_freq( per_parent_stats_freq_group_by_tables[group]. global_duration_freq_tables, 'us') self._print_freq( per_parent_stats_freq_group_by_tables[group]. global_pc_freq_tables, '%') self._print_freq( per_parent_stats_freq_group_by_tables[group]. 
global_count_freq_tables, 'instances') if log_table: self._print_period_events(log_table) if top_table: self._print_period_events(top_table) if aggregated_log_tables: self._print_aggregated_log(aggregated_log_tables) def _get_filtered_min_max_count_avg_total_values(self, durations): min = None max = None count = 0 avg = 0 total = 0 filter_list = [] for d in durations: if not self._filter_duration(d): continue if min is None or min > d: min = d if max is None or max < d: max = d count += 1 total += d filter_list.append(d) if count > 0: avg = total / count else: avg = 0 return min, max, count, avg, total, filter_list def _get_filtered_min_max_count_avg_total_flist(self, period_list): min = None max = None count = 0 avg = 0 total = 0 filter_list = [] for period_event in period_list: if not self._filter_event_duration(period_event): continue if min is None or min > period_event.duration: min = period_event.duration if max is None or max < period_event.duration: max = period_event.duration count += 1 total += period_event.duration filter_list.append(period_event) if count > 0: avg = total / count else: avg = 0 return min, max, count, avg, total, filter_list def _get_agg_filtered_min_max_count_avg_total_flist(self, ag_list): min = None max = None count = 0 avg = 0 total = 0 filter_list = [] for ag_event in ag_list: period_event = ag_event.event if not self._filter_event_duration(period_event): continue if min is None or min > period_event.duration: min = period_event.duration if max is None or max < period_event.duration: max = period_event.duration count += 1 total += period_event.duration filter_list.append(period_event) if count > 0: avg = total / count else: avg = 0 return min, max, count, avg, total, filter_list def _find_aggregated_subperiods(self, root, event, aggregated_list, group_by_captures, full_captures): if len(self._analysis_conf._select) == 0 or \ event.name in self._analysis_conf._select: aggregated_list.append(_AggregatedItem(event, root, group_by_captures, full_captures)) for capture in event.filtered_captures( self._analysis_conf._group_by): group_by_captures.append(capture) for capture in event.full_captures(): full_captures.append(capture) for child in event.children: self._find_aggregated_subperiods(root, child, aggregated_list, group_by_captures, full_captures) def _add_parent_per_group_active_periods(self, event, per_group_active_periods, group_key): p = None if event.parent is not None and \ event.parent not in per_group_active_periods[group_key].keys(): p = self._add_parent_per_group_active_periods( event.parent, per_group_active_periods, group_key) per_group_active_periods[group_key][event] = _TmpAggregation(p) return per_group_active_periods[group_key][event] def _account_parents_in_group(self, event, full_captures, per_parent_period_group_by_stats, per_group_active_periods, per_period_group_by_stats): for g in full_captures: group_key = '' if len(g) < len(self._analysis_conf._group_by.keys()): continue for group in sorted(g, key=lambda x: x[0]): if len(group_key) == 0: group_key = '%s = %s' % (group[0], group[1]) else: group_key = '%s, %s = %s' % (group_key, group[0], group[1]) if len(group_key) == 0: continue if group_key not in per_group_active_periods.keys(): per_group_active_periods[group_key] = OrderedDict() # Statistics for this event alone in this group if group_key not in per_period_group_by_stats.keys(): per_period_group_by_stats[group_key] = OrderedDict() if event.name not in per_period_group_by_stats[group_key].keys(): 
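            # group_key now reads like 'comm = bash, prio = 20'
            # (hypothetical values): captures are sorted by capture name
            # before joining, so the same set of captures always yields
            # the same key.  E.g.:
            #
            #   [('prio', 20), ('comm', 'bash')]
            #       -> 'comm = bash, prio = 20'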
per_period_group_by_stats[group_key][event.name] = \ _PeriodStats() per_period_group_by_stats[group_key][event.name].add_duration( event.duration) if group_key not in per_parent_period_group_by_stats.keys(): per_parent_period_group_by_stats[group_key] = OrderedDict() if event.name not in \ per_parent_period_group_by_stats[group_key].keys(): per_parent_period_group_by_stats[group_key][event.name] = \ _AggregatedPeriodStats( self._analysis_conf.period_def_registry, event.name) # Account all parent periods of this event in all of its groups _parent = event.parent _child = event while _parent is not None: if _parent not in per_group_active_periods[group_key].keys(): self._add_parent_per_group_active_periods( _parent, per_group_active_periods, group_key) if _parent.name not in \ per_parent_period_group_by_stats[group_key].keys(): per_parent_period_group_by_stats[group_key][_parent.name] \ = _AggregatedPeriodStats( self._analysis_conf.period_def_registry, _parent.name) per_group_active_periods[group_key][_parent].add_child( _child.name, _child.duration) _parent = _parent.parent if event in per_group_active_periods[group_key].keys(): per_parent_period_group_by_stats[group_key][event.name]. \ finish_period( event.start_ts, event.end_ts, per_group_active_periods[group_key][event].children) def _hierarchical_sub(self, tmp_hierarchical_list, event, per_period_stats, per_parent_period_group_by_stats, active_periods, ancestors_captures, per_group_active_periods, per_period_group_by_stats): tmp_hierarchical_list.append(event) event_captures = event.filtered_captures(self._analysis_conf._group_by) # print(parent_captures, event_captures) # Our local level capture to return to our parent combined with the # captures of our children. local_captures = [] global_captures = [] # Recursively iterate over all the children of this period for child in event.children: if not self._filter_event_duration(child): continue if child.name not in per_period_stats.keys(): per_period_stats[child.name] = _AggregatedPeriodStats( self._analysis_conf.period_def_registry, child.name) active_periods[event].add_child(child.name, child.duration) active_periods[child] = _TmpAggregation(active_periods[event]) child_captures = self._hierarchical_sub( tmp_hierarchical_list, child, per_period_stats, per_parent_period_group_by_stats, active_periods, ancestors_captures + event_captures, per_group_active_periods, per_period_group_by_stats) del(active_periods[child]) for c in child_captures: local_captures.append(event_captures + c) global_captures.append(event_captures.copy() + c) if len(local_captures) == 0: local_captures = [event_captures] global_captures = [event_captures.copy()] full_captures = [] for c in global_captures: tmp_c = c.copy() for d in ancestors_captures: tmp_c.append(d) # dedup if tmp_c not in full_captures: full_captures.append(tmp_c) active_periods[event].capture_groups = full_captures self._account_parents_in_group(event, full_captures, per_parent_period_group_by_stats, per_group_active_periods, per_period_group_by_stats) per_period_stats[event.name].finish_period( event.start_ts, event.end_ts, active_periods[event].children) return local_captures def _get_aggregated_lists(self): # Dict with parent period as key. Each entry contains a dict # of all child period that each contain a list of _AggregatedItem. 
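        # For instance, with hypothetical period names, after
        # aggregating 'net' children under each 'frame' parent:
        #
        #   parent_aggregated_dict[<frame event>]['net']
        #       -> [_AggregatedItem, _AggregatedItem, ...]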
# parent_aggregated_dict[parent_period][child_period] = [] parent_aggregated_dict = {} # List of PeriodEvent ordered in hierarchy (parents are followed # by their children) hierarchical_list = [] # dict of _AggregatedPeriodStats # OrderedDict because we want the same order as the period_tree per_period_stats = OrderedDict() per_parent_period_group_by_stats = OrderedDict() # Just the stats for the period per group (not relative to # its parents) per_period_group_by_stats = OrderedDict() # active_periods[period_event] = _TmpAggregation() active_periods = {} per_group_active_periods = {} for period_event in self._analysis.all_period_list: if not self._filter_event_duration(period_event): continue if self._analysis_conf._order_by == "hierarchy" or \ self._args.stats or self._args.freq: # Only top-level events to start the recursive iteration # and extract per_parent stats/freq and hierarchical list # of periods if period_event.parent is None: active_periods[period_event] = _TmpAggregation() if period_event.name not in per_period_stats.keys(): per_period_stats[period_event.name] = \ _AggregatedPeriodStats( self._analysis_conf.period_def_registry, period_event.name) tmp_hierarchical_list = [] self._hierarchical_sub( tmp_hierarchical_list, period_event, per_period_stats, per_parent_period_group_by_stats, active_periods, [], per_group_active_periods, per_period_group_by_stats) del(active_periods[period_event]) for item in tmp_hierarchical_list: hierarchical_list.append(item) if period_event.name != self._analysis_conf._aggregate_by: continue if period_event not in parent_aggregated_dict.keys(): parent_aggregated_dict[period_event] = {} # Associate the periods with their full capture list (each period # sees its own capture and the capture of all its children) tmp_list = [] for child in period_event.children: if not self._filter_event_duration(child): continue self._find_aggregated_subperiods( period_event, child, tmp_list, period_event.filtered_captures( self._analysis_conf._group_by), period_event.full_captures()) for item in tmp_list: if item.event.name not in \ parent_aggregated_dict[period_event].keys(): parent_aggregated_dict[period_event][item.event.name] = [] parent_aggregated_dict[period_event][item.event.name]. \ append(item) ordered_parent = collections.OrderedDict( sorted(parent_aggregated_dict.items(), key=lambda t: t[0].start_ts)) return ordered_parent, hierarchical_list, per_period_stats, \ per_parent_period_group_by_stats, per_period_group_by_stats def _get_aggregated_groups(self, per_parent_aggregated_dict): # Group and flatten event list by captured keys, aggregate by parent # groups[group_key][parent][child] = [_AggregatedItem, ...] 
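        # e.g. grouping by a 'comm' capture (hypothetical values):
        #
        #   groups['comm = bash'][<parent event>]['net'] -> bash items
        #   groups['comm = sshd'][<parent event>]['net'] -> sshd items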
groups = {} for parent in per_parent_aggregated_dict.keys(): for child in per_parent_aggregated_dict[parent].keys(): for ag_event in per_parent_aggregated_dict[parent][child]: group_key = "" for group in sorted(ag_event.group_by_captures, key=lambda x: x[0]): if len(group_key) == 0: group_key = "%s = %s" % (group[0], group[1]) else: group_key = "%s, %s = %s" % (group_key, group[0], group[1]) if group_key not in groups.keys(): groups[group_key] = {} if parent not in groups[group_key].keys(): groups[group_key][parent] = {} if child not in groups[group_key][parent].keys(): groups[group_key][parent][child] = [] groups[group_key][parent][child].append(ag_event) return groups def _get_total_period_lists_stats(self): if self._args.min_duration is None and \ self._args.max_duration is None: total_list = self._analysis.all_period_list stdev = self._compute_period_duration_stdev(total_list) total_stats = _PeriodStats( count=self._analysis.all_count, min=self._analysis.all_min_duration, max=self._analysis.all_max_duration, stdev=stdev, total=self._analysis.all_total_duration ) else: min, max, count, avg, total, total_list = \ self._get_filtered_min_max_count_avg_total_flist( self._analysis.all_period_list) total_stats = _PeriodStats( count=count, min=min, max=max, stdev=self._compute_period_duration_stdev(total_list), total=total, ) return [total_list], total_stats def _get_one_hierarchical_log_table(self, begin_ns, end_ns, aggregated_list, sub, top): if top: table = self._mi_create_result_table( self._MI_TABLE_CLASS_AGGREGATED_TOP, begin_ns, end_ns, subtitle=sub) top_events = sorted(aggregated_list, key=operator.attrgetter( 'event.duration'), reverse=True) top_events = top_events[:self._args.limit] for ag_event in top_events: table.append_row( parent_begin_ts=mi.Timestamp( ag_event.parent_event.start_ts), parent_end_ts=mi.Timestamp( ag_event.parent_event.end_ts), parent_name=mi.String(ag_event.parent_event.name), child_begin_ts=mi.Timestamp(ag_event.event.start_ts), child_end_ts=mi.Timestamp(ag_event.event.end_ts), child_name=mi.String(ag_event.event.name), child_duration=mi.Duration(ag_event.event.duration), parent_duration=mi.Duration( ag_event.parent_event.duration), captures=mi.String(str(ag_event.full_captures)), ) else: table = self._mi_create_result_table( self._MI_TABLE_CLASS_HIERARCHICAL_LOG, begin_ns, end_ns, subtitle=sub) for ag_event in aggregated_list: table.append_row( parent_begin_ts=mi.Timestamp( ag_event.parent_event.start_ts), parent_end_ts=mi.Timestamp(ag_event.parent_event.end_ts), parent_name=mi.String(ag_event.parent_event.name), child_begin_ts=mi.Timestamp(ag_event.event.start_ts), child_end_ts=mi.Timestamp(ag_event.event.end_ts), child_name=mi.String(ag_event.event.name), child_duration=mi.Duration(ag_event.event.duration), parent_duration=mi.Duration( ag_event.parent_event.duration), captures=mi.String(str(ag_event.full_captures)), ) return table def _get_hierarchical_log_top_result_table( self, begin_ns, end_ns, aggregated_list, aggregated_groups, top=False): result_tables = [] ag_list = "" for i in self._analysis_conf._select: if len(ag_list) == 0: ag_list = i else: ag_list = "%s, %s" % (ag_list, i) sub = "Aggregation of (%s) by %s" % ( ag_list, self._analysis_conf._aggregate_by) if aggregated_groups is None: table = self._get_one_hierarchical_log_table(begin_ns, end_ns, aggregated_list, sub, top) result_tables.append(table) else: for group in aggregated_groups.keys(): group_sub = "%s, group: %s" % (sub, group) result_tables.append(self._get_one_hierarchical_log_table( 
begin_ns, end_ns, aggregated_groups[group], group_sub, top)) return result_tables def _get_full_period_path(self, period_name): if len(period_name) == 0: return period_name return self._analysis_conf.period_def_registry.period_full_path( period_name) def _get_log_result_table(self, begin_ns, end_ns, period_list): result_table = self._mi_create_result_table(self._MI_TABLE_CLASS_LOG, begin_ns, end_ns) for period_event in period_list: if not self._filter_event_duration(period_event): continue result_table.append_row( begin_ts=mi.Timestamp(period_event.start_ts), end_ts=mi.Timestamp(period_event.end_ts), duration=mi.Duration(period_event.duration), name=mi.String(self._get_full_period_path(period_event.name)), begin_captures=mi.String(period_event.begin_captures), end_captures=mi.String(period_event.end_captures), ) return result_table def _get_top_result_table(self, begin_ns, end_ns, event_list): result_table = self._mi_create_result_table( self._MI_TABLE_CLASS_TOP, begin_ns, end_ns) top_events = sorted(event_list, key=operator.attrgetter('duration'), reverse=True) count = 0 for period_event in top_events: if not self._filter_event_duration(period_event): continue if self._args.select and period_event.name not in \ self._args.select: continue result_table.append_row( begin_ts=mi.Timestamp(period_event.start_ts), end_ts=mi.Timestamp(period_event.end_ts), duration=mi.Duration(period_event.duration), name=mi.String(period_event.name), begin_captures=mi.String(period_event.begin_captures), end_captures=mi.String(period_event.end_captures), ) count += 1 if count == self._args.limit: break return result_table def _get_ordered_period_stats_list(self, parent_name, period_stats_list, period_tree): if parent_name not in self._analysis.all_period_stats.keys(): return period_stats_list.append(self._analysis.all_period_stats[parent_name]) for child in sorted(period_tree.keys()): self._get_ordered_period_stats_list(child, period_stats_list, period_tree[child]) def _get_per_parent_stats_result_table(self, begin_ns, end_ns, per_period_stats, group_prefix, not_grouped_per_period_stats): duration_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_PARENT_STATS, begin_ns, end_ns, subtitle="%sWith active children" % group_prefix) count_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_PARENT_COUNT, begin_ns, end_ns, subtitle="%sWith active children" % group_prefix) global_duration_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_PARENT_STATS, begin_ns, end_ns, subtitle="%sGlobally" % group_prefix) global_count_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_PARENT_COUNT, begin_ns, end_ns, subtitle="%sGlobally" % group_prefix) pc_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_PARENT_PC, begin_ns, end_ns, subtitle="%sWith active children" % group_prefix) global_pc_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_PARENT_PC, begin_ns, end_ns, subtitle="%sGlobally" % group_prefix) ret = _StatsFreqTables() ret.per_parent_stats_table = duration_table ret.per_parent_count_table = count_table ret.global_duration_table = global_duration_table ret.global_count_table = global_count_table ret.per_parent_pc_table = pc_table ret.global_pc_table = global_pc_table for period in per_period_stats.keys(): if self._analysis_conf._aggregate_by is not None and \ period not in self._analysis_conf._aggregate_by: continue ret.duration_values[period] = {} ret.count_values[period] = {} ret.pc_values[period] = {} ret.global_duration_values[period] = {} 
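            # The *_values dicts keep the raw per-child samples, keyed
            # as values[parent][child]: durations in ns, counts, and
            # percentages of the parent's time.  The later freq pass
            # bins exactly these lists, so the frequency distributions
            # stay consistent with the stats rows built below.  Shape,
            # with hypothetical names:
            #
            #   ret.duration_values['frame']['net'] = [12000, 8000, ...]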
ret.global_count_values[period] = {} ret.global_pc_values[period] = {} for child in per_period_stats[period]._children.keys(): if self._args.select is not None and \ child not in self._args.select: continue c = per_period_stats[period]._children[child] if period not in c.parent_count.keys(): continue nogroup_c = not_grouped_per_period_stats[period] if per_period_stats[period].nr_periods == 0: global_duration_avg = 0 global_count_avg = 0 duration_avg = 0 count_avg = 0 pc_avg = 0 else: global_duration_avg = c.total / \ nogroup_c.nr_periods global_count_avg = c.total_count / \ nogroup_c.nr_periods duration_avg = c.total / c.parent_count[period] count_avg = c.total_count / c.parent_count[period] pc_avg = c.total_pc / c.parent_count[period] global_pc_avg = c.total_pc / \ nogroup_c.nr_periods if len(c.durations) > 2: duration_stdev = mi.Duration(statistics.stdev(c.durations)) count_stdev = mi.Number(statistics.stdev(c.count_array)) pc_stdev = mi.Number(statistics.stdev(c.pc_array)) else: duration_stdev = mi.Unknown() count_stdev = mi.Unknown() pc_stdev = mi.Unknown() # Make temporary copies in case we need to reuse the # original table afterwards. global_durations = c.durations.copy() global_count_array = c.count_array.copy() global_pc_array = c.pc_array.copy() # Save the raw values if we need them for the frequency # distributions ret.duration_values[period][child] = c.durations.copy() ret.count_values[period][child] = c.count_array.copy() ret.pc_values[period][child] = c.pc_array.copy() ret.global_duration_values[period][child] = c.durations.copy() ret.global_count_values[period][child] = c.count_array.copy() ret.global_pc_values[period][child] = c.pc_array.copy() if c.parent_count[period] < \ nogroup_c.nr_periods: global_min = 0 global_min_count = 0 global_min_pc = 0 for i in range(nogroup_c.nr_periods - c.parent_count[period]): global_durations.append(0) global_count_array.append(0) global_pc_array.append(0) ret.global_duration_values[period][child].append(0) ret.global_count_values[period][child].append(0) ret.global_pc_values[period][child].append(0) else: global_min = c.min global_min_count = c.min_count global_min_pc = c.min_pc if nogroup_c.nr_periods > 2: global_duration_stdev = mi.Duration( statistics.stdev(global_durations)) global_count_stdev = mi.Number( statistics.stdev(global_count_array)) global_pc_stdev = mi.Number( statistics.stdev(global_pc_array)) else: global_duration_stdev = mi.Unknown() global_count_stdev = mi.Unknown() global_pc_stdev = mi.Unknown() duration_table.append_row( name=mi.String(self._get_full_period_path(child)), parent=mi.String(self._get_full_period_path(period)), min_duration=mi.Duration(c.min), avg_duration=mi.Duration(duration_avg), max_duration=mi.Duration(c.max), stdev_duration=duration_stdev, ) count_table.append_row( name=mi.String(self._get_full_period_path(child)), parent=mi.String(self._get_full_period_path(period)), min=mi.Number(c.min_count), avg=mi.Number(count_avg), max=mi.Number(c.max_count), stdev=count_stdev, ) pc_table.append_row( name=mi.String(self._get_full_period_path(child)), parent=mi.String(self._get_full_period_path(period)), min=mi.Number(c.min_pc), avg=mi.Number(pc_avg), max=mi.Number(c.max_pc), stdev=pc_stdev, ) global_duration_table.append_row( name=mi.String(self._get_full_period_path(child)), parent=mi.String(self._get_full_period_path(period)), min_duration=mi.Duration(global_min), avg_duration=mi.Duration(global_duration_avg), max_duration=mi.Duration(c.max), stdev_duration=global_duration_stdev, ) 
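                # The zero-padding above is what separates the two table
                # flavours: the "With active children" rows average over
                # the c.parent_count[period] parents in which the child
                # actually ran, while the "Globally" rows average over
                # every parent instance, appending one 0 sample per
                # parent where the child never appeared.  Worked example
                # with hypothetical numbers -- child active in 4 of 10
                # parents, 8000 ns total:
                #
                #   duration_avg        = 8000 / 4  = 2000 ns (active only)
                #   global_duration_avg = 8000 / 10 =  800 ns (all parents)
                #   global_durations    = [d1, d2, d3, d4, 0, 0, 0, 0, 0, 0]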
global_count_table.append_row( name=mi.String(self._get_full_period_path(child)), parent=mi.String(self._get_full_period_path(period)), min=mi.Number(global_min_count), avg=mi.Number(global_count_avg), max=mi.Number(c.max_count), stdev=global_count_stdev, ) global_pc_table.append_row( name=mi.String(self._get_full_period_path(child)), parent=mi.String(self._get_full_period_path(period)), min=mi.Number(global_min_pc), avg=mi.Number(global_pc_avg), max=mi.Number(c.max_pc), stdev=global_pc_stdev, ) return ret def _find_filtered_uniform_freq_values(self, per_period_group_stats): for period in per_period_group_stats.keys(): table = per_period_group_stats[period] min, max, count, avg, total, total_list = \ self._get_filtered_min_max_count_avg_total_values( table.durations) min, max, step = self._find_uniform_freq_values(total_list, 1000, 'duration') # We only care about the last values return min, max, step def _get_grouped_by_period_stats_freq(self, begin_ns, end_ns, per_period_group_stats, group_prefix, freq_min, freq_max, freq_step): stats_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_PER_PERIOD_STATS, begin_ns, end_ns) freq_tables = [] freq_tables_by_period_name = {} for period in per_period_group_stats.keys(): if self._args.select is not None and \ period not in self._args.select: continue table = per_period_group_stats[period] min, max, count, avg, total, total_list = \ self._get_filtered_min_max_count_avg_total_values( table.durations) stdev = self._compute_period_duration_stdev_values(total_list) if math.isnan(stdev): stdev = mi.Unknown() else: stdev = mi.Duration(stdev) count = len(total_list) stats_table.append_row( name=mi.String(self._get_full_period_path(period)), count=mi.Number(count), min_duration=mi.Duration(min), avg_duration=mi.Duration(avg), max_duration=mi.Duration(max), stdev_duration=stdev, runtime=mi.Duration(total), ) subtitle = '{}Duration of period: {}'.format(group_prefix, period) tmp_table = self._get_one_freq_result_table( self._MI_TABLE_CLASS_FREQ_DURATION, begin_ns, end_ns, freq_min, freq_max, freq_step, total_list, subtitle, 1000) freq_tables.append(tmp_table) freq_tables_by_period_name[period] = tmp_table return stats_table, freq_tables, freq_tables_by_period_name def _get_per_period_stats_result_table(self, begin_ns, end_ns, period_tree): stats_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_PER_PERIOD_STATS, begin_ns, end_ns) period_stats_list = [] for parent in period_tree.keys(): self._get_ordered_period_stats_list(parent, period_stats_list, period_tree[parent]) for period_stats in period_stats_list: if not period_stats.period_list: continue if self._args.select is not None and \ period_stats.name not in self._args.select: continue if self._args.min_duration is None and \ self._args.max_duration is None: stdev = self._compute_period_duration_stdev( period_stats.period_list) min = period_stats.min_duration max = period_stats.max_duration count = period_stats.count total = period_stats.total_duration if count > 0: avg = period_stats.total_duration / \ period_stats.count else: avg = 0 else: min, max, count, avg, total, period_list = \ self._get_filtered_min_max_count_avg_total_flist( period_stats.period_list) if count == 0: continue stdev = self._compute_period_duration_stdev(period_list) if math.isnan(stdev): stdev = mi.Unknown() else: stdev = mi.Duration(stdev) stats_table.append_row( name=mi.String(self._get_full_period_path(period_stats.name)), count=mi.Number(count), min_duration=mi.Duration(min), avg_duration=mi.Duration(avg), 
max_duration=mi.Duration(max), stdev_duration=stdev, runtime=mi.Duration(total), ) return stats_table def _get_one_aggregated_log_table(self, begin_ns, end_ns, per_parent_aggregated_dict, sub, top): table = self._mi_create_result_table( self._MI_TABLE_CLASS_AGGREGATED_LOG, begin_ns, end_ns, subtitle=sub) for parent_period in per_parent_aggregated_dict.keys(): for child_period in \ per_parent_aggregated_dict[parent_period].keys(): child_period_list = \ per_parent_aggregated_dict[parent_period][child_period] min, max, count, avg, total, period_list = \ self._get_agg_filtered_min_max_count_avg_total_flist( child_period_list) stdev = self._compute_period_agg_duration_stdev( child_period_list) if math.isnan(stdev): stdev = mi.Unknown() else: stdev = mi.Duration(stdev) table.append_row( parent_begin_ts=mi.Timestamp( parent_period.start_ts), parent_end_ts=mi.Timestamp( parent_period.end_ts), parent_name=mi.String(parent_period.name), child_name=mi.String(self._get_full_period_path( child_period)), count=mi.Number(count), min_duration=mi.Duration(min), avg_duration=mi.Duration(avg), max_duration=mi.Duration(max), runtime=mi.Duration(total), parent_captures=mi.String(parent_period.full_captures()), stdev_duration=stdev, ) return table def _get_aggregated_log_table(self, begin_ns, end_ns, per_parent_aggregated_dict, aggregated_groups, top=False): result_tables = [] ag_list = "" for i in self._analysis_conf._select: if len(ag_list) == 0: ag_list = i else: ag_list = "%s, %s" % (ag_list, i) sub = "Aggregation of (%s) by %s" % ( ag_list, self._analysis_conf._aggregate_by) if aggregated_groups is None: table = self._get_one_aggregated_log_table( begin_ns, end_ns, per_parent_aggregated_dict, sub, top) result_tables.append(table) else: for group in aggregated_groups.keys(): group_sub = "%s, group: %s" % (sub, group) result_tables.append(self._get_one_aggregated_log_table( begin_ns, end_ns, aggregated_groups[group], group_sub, top)) return result_tables def _fill_freq_result_table(self, period_list, stats, min_duration, max_duration, step, freq_table): # The number of bins for the histogram resolution = self._args.freq_resolution if not self._args.freq_uniform: if self._args.min is not None: min_duration = self._args.min else: min_duration = stats.min if self._args.max is not None: max_duration = self._args.max else: max_duration = stats.max # ns to µs if min_duration is None: min_duration = 0 else: min_duration /= 1000 if max_duration is None: max_duration = 0 else: max_duration /= 1000 step = (max_duration - min_duration) / resolution if step == 0: return buckets = [] counts = [] for i in range(resolution): buckets.append(i * step) counts.append(0) for period_event in period_list: if not self._filter_event_duration(period_event): continue duration = period_event.duration / 1000 index = int((duration - min_duration) / step) if index >= resolution: # special case for max value: put in last bucket (includes # its upper bound) if duration == max_duration: counts[index - 1] += 1 continue counts[index] += 1 for index, count in enumerate(counts): lower_bound = index * step + min_duration upper_bound = (index + 1) * step + min_duration freq_table.append_row( lower=mi.Duration.from_us(lower_bound), upper=mi.Duration.from_us(upper_bound), count=mi.Number(count), ) def _fill_freq_result_table_values(self, values, min_duration, max_duration, step, freq_table, ratio): # Differ from _fill_freq_result_table because we work directly with # a list of values instead of periods. 
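        # Both fillers share the same binning scheme: step is
        # (max - min) / resolution and each sample lands in bucket
        # int((value - min) / step), with a value equal to max folded
        # back into the last bucket so the upper bound is inclusive.
        # Worked example with hypothetical values -- min=0 us,
        # max=100 us, resolution=20, hence step=5 us:
        #
        #   12 us  -> int(12 / 5) = 2   -> bucket [10, 15)
        #   100 us -> int(100 / 5) = 20 -> folded into bucket 19, [95, 100]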
# The number of bins for the histogram resolution = self._args.freq_resolution if not self._args.freq_uniform: if self._args.min is not None: min_duration = self._args.min else: min_duration = min(values) / ratio if self._args.max is not None: max_duration = self._args.max else: max_duration = max(values) / ratio # ns to µs if min_duration is None: min_duration = 0 if max_duration is None: max_duration = 0 step = (max_duration - min_duration) / resolution if step == 0: return buckets = [] counts = [] for i in range(resolution): buckets.append(i * step) counts.append(0) for v in values: if not self._filter_duration(v): continue duration = v / ratio index = int((duration - min_duration) / step) if index < 0: raise ValueError('Invalid range, duration=%s, min=%s, max=%s,' ' step=%s, resolution=%s' % ( duration, min_duration, max_duration, step, resolution)) if index >= resolution: # special case for max value: put in last bucket (includes # its upper bound) if duration == max_duration: counts[index - 1] += 1 continue counts[index] += 1 for index, count in enumerate(counts): lower_bound = index * step + min_duration upper_bound = (index + 1) * step + min_duration freq_table.append_row( lower=mi.Duration.from_us(lower_bound), upper=mi.Duration.from_us(upper_bound), count=mi.Number(count), ) def _get_total_freq_result_tables(self, begin_ns, end_ns): freq_tables = [] period_lists, period_stats = self._get_total_period_lists_stats() min_duration = None max_duration = None step = None subtitle = 'All periods' if self._args.freq_uniform: durations = [] for period_list in period_lists: for period_event in period_list: if not self._filter_event_duration(period_event): continue durations.append(period_event.duration) min_duration, max_duration, step = \ self._find_uniform_freq_values(durations) for period_list in period_lists: freq_table = \ self._mi_create_result_table( self._MI_TABLE_CLASS_FREQ_DURATION, begin_ns, end_ns, subtitle) self._fill_freq_result_table(period_list, period_stats, min_duration, max_duration, step, freq_table) freq_tables.append(freq_table) return freq_tables def _get_per_group_freq_series_tables(self, begin_ns, end_ns, per_period_freq_group_by_tables, freq_tables_group_per_period_names): column_infos = [ ('duration_lower', 'Duration (lower bound)', mi.Duration), ('duration_upper', 'Duration (upper bound)', mi.Duration), ] unique_group_names = {} per_period_tables = {} for group in freq_tables_group_per_period_names.keys(): unique_group_names[group] = None column_infos.append(( # urgh, need to sanitize for the namedtuple, only alnum '{}'.format(re.sub(r'\W+', '', group)), # subtitle: '{}'.format(group), mi.Number, 'count')) for period in freq_tables_group_per_period_names[group]: title = 'Period \'%s\' duration frequency distribution ' \ 'per group' % (period) table_class = mi.TableClass(None, title, column_infos) result_table = mi.ResultTable(table_class, begin_ns, end_ns) per_period_tables[period] = result_table for period in per_period_tables.keys(): for i in range(self._args.freq_resolution): first_group = next(iter(unique_group_names)) row_tuple = [ freq_tables_group_per_period_names[first_group][period]. rows[i].lower, freq_tables_group_per_period_names[first_group][period]. 
rows[i].upper] for group in freq_tables_group_per_period_names.keys(): group_table = freq_tables_group_per_period_names[group] freq_row = group_table[period].rows[i] row_tuple.append(freq_row.count) per_period_tables[period].append_row_tuple(tuple(row_tuple)) return per_period_tables def _get_period_lists_stats(self): period_lists = {} period_stats = {} for period in self._analysis.all_period_stats.keys(): period_list = self._analysis.all_period_stats[period].period_list if not period_list: continue if self._args.min_duration is None and \ self._args.max_duration is None: stdev = self._compute_period_duration_stdev(period_list) count = len(period_list) min = self._analysis.all_period_stats[period].min_duration max = self._analysis.all_period_stats[period].max_duration total = \ self._analysis.all_period_stats[period].total_duration else: min, max, count, avg, total, period_list = \ self._get_filtered_min_max_count_avg_total_flist( period_list) stdev = self._compute_period_duration_stdev(period_list) period_stats[period] = _PeriodStats( count=count, min=min, max=max, stdev=stdev, total=total) period_lists[period] = period_list return period_lists, period_stats def _find_table_min_max_step(self, table, ratio, category): _min = None max = 0 # Find the uniform freq values across all parent/child combinations for period in table.keys(): for child in table[period].keys(): tmp_min, tmp_max, tmp_step = \ self._find_uniform_freq_values( table[period][child], ratio, category) if _min is None or tmp_min < _min: _min = tmp_min if tmp_max > max: max = tmp_max if _min is None: steps = 0 else: steps = (max - _min) / self._args.freq_resolution return _min, max, steps def _find_uniform_values(self, tables): if not self._args.freq_uniform: return None, None, None, None, None, None, \ None, None, None, None, None, None, \ None, None, None, None, None, None duration_min, duration_max, duration_step = \ self._find_table_min_max_step(tables.duration_values, 1000, 'duration') global_duration_min, global_duration_max, global_duration_step = \ self._find_table_min_max_step(tables.global_duration_values, 1000, 'global_duration') count_min, count_max, count_step = \ self._find_table_min_max_step(tables.count_values, 1, 'count') global_count_min, global_count_max, global_count_step = \ self._find_table_min_max_step(tables.global_count_values, 1, 'global_count') pc_min, pc_max, pc_step = \ self._find_table_min_max_step(tables.pc_values, 1, 'pc') global_pc_min, global_pc_max, global_pc_step = \ self._find_table_min_max_step(tables.global_pc_values, 1, 'global_pc') return duration_min, duration_max, duration_step, \ global_duration_min, global_duration_max, \ global_duration_step, \ count_min, count_max, count_step, \ global_count_min, global_count_max, \ global_count_step, \ pc_min, pc_max, pc_step, \ global_pc_min, global_pc_max, \ global_pc_step def _get_one_freq_result_table(self, mi_class, begin_ns, end_ns, min, max, step, values, subtitle, ratio=1): freq_table = \ self._mi_create_result_table(mi_class, begin_ns, end_ns, subtitle) self._fill_freq_result_table_values(values, min, max, step, freq_table, ratio) return freq_table def _get_per_parent_freq_result_table(self, begin_ns, end_ns, tables, group_prefix=''): duration_min, duration_max, duration_step, \ global_duration_min, global_duration_max, \ global_duration_step, \ count_min, count_max, count_step, \ global_count_min, global_count_max, \ global_count_step, \ pc_min, pc_max, pc_step, \ global_pc_min, global_pc_max, \ global_pc_step = 
self._find_uniform_values(tables) # sorted to get the same output order between runs for period in sorted(tables.duration_values.keys()): if self._analysis_conf._aggregate_by is not None and \ period not in self._analysis_conf._aggregate_by: continue for child in tables.duration_values[period].keys(): if self._args.select is not None and \ child not in self._args.select: continue subtitle = "%sDuration of %s per %s" % ( group_prefix, self._get_full_period_path(child), self._get_full_period_path(period)) # ratio=1000 for ns -> us tables.per_parent_freq_tables.append( self._get_one_freq_result_table( self._MI_TABLE_CLASS_FREQ_DURATION, begin_ns, end_ns, duration_min, duration_max, duration_step, tables.duration_values[period][child], subtitle, ratio=1000)) subtitle = "%sNumber of %s per %s" % ( group_prefix, self._get_full_period_path(child), self._get_full_period_path(period)) tables.per_parent_count_freq_tables.append( self._get_one_freq_result_table( self._MI_TABLE_CLASS_FREQ_COUNT, begin_ns, end_ns, count_min, count_max, count_step, tables.count_values[period][child], subtitle)) subtitle = "%sUsage ratio of %s per %s" % ( group_prefix, self._get_full_period_path(child), self._get_full_period_path(period)) tables.per_parent_pc_freq_tables.append( self._get_one_freq_result_table( self._MI_TABLE_CLASS_FREQ_PC, begin_ns, end_ns, pc_min, pc_max, pc_step, tables.pc_values[period][child], subtitle)) subtitle = "%sGlobal duration of %s per %s" % ( group_prefix, self._get_full_period_path(child), self._get_full_period_path(period)) # ratio=1000 for ns -> us tables.global_duration_freq_tables.append( self._get_one_freq_result_table( self._MI_TABLE_CLASS_FREQ_DURATION, begin_ns, end_ns, global_duration_min, global_duration_max, global_duration_step, tables.global_duration_values[period][child], subtitle, ratio=1000)) subtitle = "%sGlobal number of %s per %s" % ( group_prefix, self._get_full_period_path(child), self._get_full_period_path(period)) tables.global_count_freq_tables.append( self._get_one_freq_result_table( self._MI_TABLE_CLASS_FREQ_COUNT, begin_ns, end_ns, global_count_min, global_count_max, global_count_step, tables.global_count_values[period][child], subtitle)) subtitle = "%sGlobal usage ratio of %s per %s" % ( group_prefix, self._get_full_period_path(child), self._get_full_period_path(period)) tables.global_pc_freq_tables.append( self._get_one_freq_result_table( self._MI_TABLE_CLASS_FREQ_PC, begin_ns, end_ns, global_pc_min, global_pc_max, global_pc_step, tables.global_pc_values[period][child], subtitle)) def _get_per_period_freq_result_tables(self, begin_ns, end_ns): freq_tables = [] period_lists, period_stats = self._get_period_lists_stats() min_duration = None max_duration = None step = None if self._args.freq_uniform: durations = [] for period_list in period_lists.values(): for period_event in period_list: if not self._filter_event_duration(period_event): continue durations.append(period_event.duration) min_duration, max_duration, step = \ self._find_uniform_freq_values(durations) for period in sorted(period_stats.keys()): if self._args.select is not None and \ period not in self._args.select: continue period_list = period_lists[period] stats = period_stats[period] subtitle = 'Duration of period: {}'.format(period) freq_table = \ self._mi_create_result_table( self._MI_TABLE_CLASS_FREQ_DURATION, begin_ns, end_ns, subtitle) self._fill_freq_result_table(period_list, stats, min_duration, max_duration, step, freq_table) freq_tables.append(freq_table) return freq_tables def 
_compute_period_duration_stdev_values(self, durations): period_durations = [] for d in durations: if not self._filter_duration(d): continue period_durations.append(d) if len(period_durations) < 2: return float('nan') return statistics.stdev(period_durations) def _compute_period_duration_stdev(self, period_events): period_durations = [] for period_event in period_events: if not self._filter_event_duration(period_event): continue period_durations.append(period_event.duration) if len(period_durations) < 2: return float('nan') return statistics.stdev(period_durations) def _compute_period_agg_duration_stdev(self, period_agg_events): period_durations = [] for period_event in period_agg_events: if not self._filter_event_duration(period_event.event): continue period_durations.append(period_event.event.duration) if len(period_durations) < 2: return float('nan') return statistics.stdev(period_durations) def _pop_next_capture_string(self, begin_captures, end_captures): if len(begin_captures.keys()) > 0: b_key, b_value = begin_captures.popitem() b_string = '%s = %s' % (b_key, b_value) else: b_string = '' if len(end_captures.keys()) > 0: e_key, e_value = end_captures.popitem() e_string = '%s = %s' % (e_key, e_value) else: e_string = '' return b_string, e_string def _print_period_events(self, result_table): fmt = '[{:<18}, {:<18}] {:>15} {:<24} {:<35} {:<35}' fmt_captures = '{:<18} {:<18} {:>18} {:<24} {:<35} {:<35}' title_fmt = '{:<20} {:<19} {:>15} {:<24} {:<35} {:<35}' print() print(result_table.title) print(title_fmt.format('Begin', 'End', 'Duration (us)', 'Name', 'Begin capture', 'End capture')) for row in result_table.rows: begin_ts = row.begin_ts.value end_ts = row.end_ts.value duration = row.duration.value name = row.name.value if name is None: name = '' # Convert back the string to dict begin_captures = ast.literal_eval(row.begin_captures.value) # Order the dict based on keys to always get the same output if begin_captures is None: begin_captures = {} begin_captures = collections.OrderedDict( sorted(begin_captures.items())) end_captures = ast.literal_eval(row.end_captures.value) if end_captures is None: end_captures = {} end_captures = collections.OrderedDict( sorted(end_captures.items())) b_string, e_string = self._pop_next_capture_string(begin_captures, end_captures) print(fmt.format(self._format_timestamp(begin_ts), self._format_timestamp(end_ts), '%0.03f' % (duration / 1000), name, b_string, e_string)) nr_lines = max(len(begin_captures.keys()), len(end_captures.keys())) for i in range(nr_lines): b_string, e_string = self._pop_next_capture_string( begin_captures, end_captures) print(fmt_captures.format('', '', '', '', b_string, e_string)) def _print_aggregated_period_events(self, result_tables): fmt = '[{:<18}, {:<18}] {:>22} {:<15} [{:<18}, {:<18}] {:>22} ' \ '{:<15} {:<35}' # fmt_captures = '{:<18} {:<18} {:>25} {:<15} {:<18} {:<25} {:>18} ' \ # '{:<15} {:<35}' title_fmt = '{:<20} {:<19} {:>22} {:<15} {:<20} {:<19} {:>22} ' \ '{:<15} {:<35}' for result_table in result_tables: print() print(result_table.title) print(result_table.subtitle) print(title_fmt.format('Parent begin', 'Parent end', 'Parent duration (us)', 'Parent name', 'Child begin', 'Child end', 'Child duration (us)', 'Child name', 'Captures')) for row in result_table.rows: parent_begin_ts = row.parent_begin_ts.value parent_end_ts = row.parent_end_ts.value parent_duration = row.parent_duration.value parent_name = row.parent_name.value child_begin_ts = row.child_begin_ts.value child_end_ts = row.child_end_ts.value child_duration = 
row.child_duration.value
                child_name = row.child_name.value
                # Convert the string back to a list of tuples
                captures = ast.literal_eval(row.captures.value)
                # Sort the captures by key to always get the same output
                # if captures is None:
                #     captures = []
                #     capture_str = ''
                # else:
                #     captures = sorted(captures, key=lambda x: x[0])
                #     captures.reverse()
                #     tmp = captures.pop()
                #     capture_str = "%s = %s" % (tmp[0], tmp[1])
                capture_str = ''
                for i in sorted(captures, key=lambda x: x[0]):
                    if len(capture_str) == 0:
                        capture_str = "%s = %s" % (i[0], i[1])
                    else:
                        capture_str = "%s, %s = %s" % (capture_str, i[0],
                                                       i[1])

                print(fmt.format(self._format_timestamp(parent_begin_ts),
                                 self._format_timestamp(parent_end_ts),
                                 '%0.03f' % (parent_duration / 1000),
                                 parent_name,
                                 self._format_timestamp(child_begin_ts),
                                 self._format_timestamp(child_end_ts),
                                 '%0.03f' % (child_duration / 1000),
                                 child_name, capture_str))
                # for i in range(len(captures)):
                #     tmp = captures.pop()
                #     capture_str = "%s = %s" % (tmp[0], tmp[1])
                #     print(fmt_captures.format('', '', '', '', '', '', '', '',
                #                               capture_str))

    def _print_total_stats(self, stats_table):
        row_format = '{:<12} {:<12} {:<12} {:<12} {:<12}'
        header = row_format.format(
            'Count', 'Min', 'Avg', 'Max', 'Stdev'
        )

        if stats_table.rows:
            print()
            print(stats_table.title + ' (us)')
            print(header)

            for row in stats_table.rows:
                if type(row.stdev_duration) is mi.Unknown:
                    stdev_str = '?'
                else:
                    stdev_str = '%0.03f' % row.stdev_duration.to_us()

                row_str = row_format.format(
                    '%d' % row.count.value,
                    '%0.03f' % row.min_duration.to_us(),
                    '%0.03f' % row.avg_duration.to_us(),
                    '%0.03f' % row.max_duration.to_us(),
                    '%s' % stdev_str,
                )

                print(row_str)

    def _print_period_tree(self, period_tree, level):
        for parent in period_tree.keys():
            if level == 0:
                lines = ''
            else:
                lines = "%s|-- " % ((level - 1) * 4 * ' ')
            print("%s%s" % (lines, parent))
            # Recurse once on the subtree: the recursive call iterates
            # over the children itself, so each child is printed
            # exactly once.
            self._print_period_tree(period_tree[parent], level + 1)

    def _print_per_period_stats(self, stats_table, period_tree):
        row_format = '{:<25} {:>8} {:>12} {:>12} {:>12} {:>12} {:>12}'
        header = row_format.format(
            'Period', 'Count', 'Min', 'Avg', 'Max', 'Stdev', 'Runtime'
        )

        print("Period tree:")
        self._print_period_tree(period_tree, 0)

        if stats_table.rows:
            print()
            print(stats_table.title + ' (us)')
            print(header)

            for row in stats_table.rows:
                if type(row.stdev_duration) is mi.Unknown:
                    stdev_str = '?'
                else:
                    stdev_str = '%0.03f' % row.stdev_duration.to_us()

                row_str = row_format.format(
                    '%s' % row.name,
                    '%d' % row.count.value,
                    '%0.03f' % row.min_duration.to_us(),
                    '%0.03f' % row.avg_duration.to_us(),
                    '%0.03f' % row.max_duration.to_us(),
                    '%s' % stdev_str,
                    '%0.03f' % row.runtime.to_us(),
                )

                print(row_str)

    def _print_per_parent_stats(self, table):
        row_format = '{:<25} {:<25} {:>12} {:>12} {:>12} {:>12}'
        header = row_format.format(
            'Period', 'Parent', 'Min', 'Avg', 'Max', 'Stdev'
        )

        if table.rows:
            print()
            print(table.title + ' (us)')
            print(table.subtitle)
            print(header)

            for row in table.rows:
                if type(row.stdev_duration) is mi.Unknown:
                    stdev_str = '?'
else: stdev_str = '%0.03f' % row.stdev_duration.to_us() row_str = row_format.format( '%s' % row.name, '%s' % row.parent, '%0.03f' % row.min_duration.to_us(), '%0.03f' % row.avg_duration.to_us(), '%0.03f' % row.max_duration.to_us(), '%s' % stdev_str, ) print(row_str) def _print_per_parent_count(self, table): row_format = '{:<25} {:<25} {:>12} {:>12} {:>12} {:>12}' header = row_format.format( 'Period', 'Parent', 'Min', 'Avg', 'Max', 'Stdev' ) if table.rows: print() print(table.title) print(table.subtitle) print(header) for row in table.rows: if type(row.stdev) is mi.Unknown: stdev_str = '?' else: stdev_str = "%0.03f" % row.stdev.value row_str = row_format.format( '%s' % row.name, '%s' % row.parent, '%d' % row.min.value, '%0.03f' % row.avg.value, '%d' % row.max.value, '%s' % stdev_str, ) print(row_str) def _print_per_parent_pc(self, table): row_format = '{:<25} {:<25} {:>12} {:>12} {:>12} {:>12}' header = row_format.format( 'Period', 'Parent', 'Min', 'Avg', 'Max', 'Stdev' ) if table.rows: print() print(table.title + ' (%)') print(table.subtitle) print(header) for row in table.rows: if type(row.stdev) is mi.Unknown: stdev_str = '?' else: stdev_str = "%0.03f" % row.stdev.value row_str = row_format.format( '%s' % row.name, '%s' % row.parent, '%d' % row.min.value, '%0.03f' % row.avg.value, '%d' % row.max.value, '%s' % stdev_str, ) print(row_str) def _one_line_captures(self, capture_tuple_list): capture_str = None for item in capture_tuple_list: if capture_str is None: capture_str = "%s = %s" % (item[0], item[1]) continue capture_str = "%s, %s = %s" % (capture_str, item[0], item[1]) return capture_str def _print_aggregated_log(self, stats_tables): fmt = '[{:<18}, {:<18}] {:>18} {:<15} {:<24} {:>12} | {:>10} ' \ '{:>12} {:>12} {:>12} {:>13} | {:>12}' title_fmt = '{:<20} {:<19} {:>18} {:<15} {:<24} {:>12} | {:>10} ' \ '{:>12} {:>12} {:>13} {:>12} | {:>12}' high_title_fmt = '{:<35} Parent {:<32} | {:<35} | {:<25} ' \ 'Durations (us) {:<22} |' for stats_table in stats_tables: print() print(stats_table.title) print(stats_table.subtitle) print(high_title_fmt.format('', '', '', '', '')) print(title_fmt.format('Begin', 'End', 'Duration (us)', 'Name', '| Child name', 'Count', 'Min', 'Avg', 'Max', 'Stdev', 'Runtime', 'Parent captures')) for row in stats_table.rows: parent_begin_ts = row.parent_begin_ts.value parent_end_ts = row.parent_end_ts.value parent_duration = parent_end_ts - parent_begin_ts parent_name = row.parent_name.value child_name = row.child_name.value captures = self._one_line_captures(row.parent_captures.value) if type(row.stdev_duration) is mi.Unknown: stdev_str = '?' 
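                # mi.Unknown is the sentinel emitted when fewer than two
                # aggregated durations were available, in which case no
                # standard deviation can be computed; the human-readable
                # output renders it as '?'.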
                else:
                    stdev_str = '%0.03f' % row.stdev_duration.to_us()

                row_str = fmt.format(
                    self._format_timestamp(parent_begin_ts),
                    self._format_timestamp(parent_end_ts),
                    '%0.03f' % (parent_duration / 1000),
                    parent_name,
                    '| %s' % child_name,
                    '%d' % row.count.value,
                    '%0.03f' % row.min_duration.to_us(),
                    '%0.03f' % row.avg_duration.to_us(),
                    '%0.03f' % row.max_duration.to_us(),
                    '%s' % stdev_str,
                    '%0.03f' % row.runtime.to_us(),
                    '%s' % captures,
                )

                print(row_str)

    def _print_frequency_distribution(self, freq_table, unit):
        title_fmt = '{} - {}'

        graph = termgraph.FreqGraph(
            data=freq_table.rows,
            get_value=lambda row: row.count.value,
            get_lower_bound=lambda row: row.lower.to_us(),
            title=title_fmt.format(freq_table.title, freq_table.subtitle),
            unit=unit,
        )

        graph.print_graph()

    def _print_freq(self, freq_tables, unit):
        for freq_table in freq_tables:
            self._print_frequency_distribution(freq_table, unit)

    def _cleanup_period_name(self, name):
        # If a period name is given with its full hierarchy, only keep
        # the last member: period names must be unique, so there is no
        # need to scope them, but since the periods are output with
        # their full hierarchy, accept that form as input as well.
        return name.split('/')[-1]

    def _validate_transform_args(self):
        args = self._args
        self._analysis_conf._group_by = {}
        self._analysis_conf._aggregate_by = None
        self._analysis_conf._select = []
        self._analysis_conf._order_by = None

        if args.group_by:
            for group in args.group_by.split(','):
                g = group.strip()
                if len(g) == 0:
                    continue
                _period_name = self._cleanup_period_name(g.split('.')[0])
                _period_field = g.split('.')[1]
                if _period_name not in \
                        self._analysis_conf._group_by.keys():
                    self._analysis_conf._group_by[_period_name] = []
                self._analysis_conf._group_by[_period_name]. \
                    append(_period_field)

        if args.order_by:
            if args.order_by not in ['time', 'hierarchy']:
                self._gen_error("Invalid order-by value")
            self._analysis_conf._order_by = args.order_by

        # TODO: check aggregation and group-by attributes are valid
        if args.select:
            for ag in args.select.split(','):
                self._analysis_conf._select.append(
                    self._cleanup_period_name(ag).strip())
        if args.aggregate_by is not None:
            self._analysis_conf._aggregate_by = self._cleanup_period_name(
                args.aggregate_by)

    def _add_arguments(self, ap):
        Command._add_min_max_args(ap)
        Command._add_freq_args(
            ap, help='Output the period duration frequency distribution')
        Command._add_top_args(ap, help='Output the top period durations')
        Command._add_log_args(
            ap, help='Output the periods in chronological order')
        Command._add_stats_args(ap, help='Output the period duration '
                                         'statistics')
        ap.add_argument('--min-duration', type=float,
                        help='Filter out periods shorter than the given '
                             'duration (usec)')
        ap.add_argument('--max-duration', type=float,
                        help='Filter out periods longer than the given '
                             'duration (usec)')
        ap.add_argument('--aggregate-by', type=str,
                        help='Name of the parent period to aggregate the '
                             'selected periods by')
        ap.add_argument('--select', type=str,
                        help='Comma-separated list of period names to '
                             'include in the results')
        ap.add_argument('--group-by', type=str,
                        help='Present the results grouped by a list of '
                             'fields (period.captured_field'
                             '[, period.captured_field2])')
        ap.add_argument('--order-by', type=str,
                        help='Order the results by: hierarchy or time')


def _run(mi_mode):
    periodcmd = PeriodAnalysisCommand(mi_mode=mi_mode)
    periodcmd.run()


def _runstats(mi_mode):
    sys.argv.insert(1, '--stats')
    _run(mi_mode)


def _runlog(mi_mode):
    sys.argv.insert(1, '--log')
    _run(mi_mode)


def _runtop(mi_mode):
    sys.argv.insert(1, '--top')
    _run(mi_mode)


def _runfreq(mi_mode):
    sys.argv.insert(1, '--freq')
    _run(mi_mode)


def runstats():
    _runstats(mi_mode=False)


def runlog():
    _runlog(mi_mode=False)


def runtop():
_runtop(mi_mode=False) def runfreq(): _runfreq(mi_mode=False) def runstats_mi(): _runstats(mi_mode=True) def runlog_mi(): _runlog(mi_mode=True) def runtop_mi(): _runtop(mi_mode=True) def runfreq_mi(): _runfreq(mi_mode=True) lttnganalyses-0.6.1/lttnganalyses/cli/syscallstats.py0000664000175000017500000002200712745737273024642 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # 2015 - Philippe Proulx # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import errno import operator import statistics from . import mi from ..core import syscalls from .command import Command class SyscallsAnalysis(Command): _DESC = """The syscallstats command.""" _ANALYSIS_CLASS = syscalls.SyscallsAnalysis _MI_TITLE = 'System call statistics' _MI_DESCRIPTION = 'Per-TID and global system call statistics' _MI_TAGS = [mi.Tags.SYSCALL, mi.Tags.STATS] _MI_TABLE_CLASS_PER_TID_STATS = 'per-tid' _MI_TABLE_CLASS_TOTAL = 'total' _MI_TABLE_CLASS_SUMMARY = 'summary' _MI_TABLE_CLASSES = [ ( _MI_TABLE_CLASS_PER_TID_STATS, 'System call statistics', [ ('syscall', 'System call', mi.Syscall), ('count', 'Call count', mi.Number, 'calls'), ('min_duration', 'Minimum call duration', mi.Duration), ('avg_duration', 'Average call duration', mi.Duration), ('max_duration', 'Maximum call duration', mi.Duration), ('stdev_duration', 'Call duration standard deviation', mi.Duration), ('return_values', 'Return values count', mi.String), ] ), ( _MI_TABLE_CLASS_TOTAL, 'Per-TID system call statistics', [ ('process', 'Process', mi.Process), ('count', 'Total system call count', mi.Number, 'calls'), ] ), ( _MI_TABLE_CLASS_SUMMARY, 'System call statistics - summary', [ ('time_range', 'Time range', mi.TimeRange), ('process', 'Process', mi.Process), ('count', 'Total system call count', mi.Number, 'calls'), ] ), ] def _analysis_tick(self, period_data, end_ns): if period_data is None: return begin_ns = period_data.period.begin_evt.timestamp total_table, per_tid_tables = self._get_result_tables(period_data, begin_ns, end_ns) if self._mi_mode: self._mi_append_result_tables(per_tid_tables) self._mi_append_result_table(total_table) else: self._print_date(begin_ns, end_ns) self._print_results(total_table, per_tid_tables) def _post_analysis(self): if not self._mi_mode: return if len(self._mi_get_result_tables(self._MI_TABLE_CLASS_TOTAL)) > 1: self._create_summary_result_table() self._mi_print() def _create_summary_result_table(self): total_tables = self._mi_get_result_tables(self._MI_TABLE_CLASS_TOTAL) 
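        # The summary merges every per-period total table into a single
        # result: one row per (time range, process) pair, spanning from
        # the first table's begin timestamp to the last table's end
        # timestamp.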
        begin = total_tables[0].timerange.begin.value
        end = total_tables[-1].timerange.end.value
        summary_table = \
            self._mi_create_result_table(self._MI_TABLE_CLASS_SUMMARY,
                                         begin, end)

        for total_table in total_tables:
            for row in total_table.rows:
                process = row.process
                count = row.count
                summary_table.append_row(
                    time_range=total_table.timerange,
                    process=process,
                    count=count,
                )

        self._mi_clear_result_tables()
        self._mi_append_result_table(summary_table)

    def _get_result_tables(self, period_data, begin_ns, end_ns):
        per_tid_tables = []
        total_table = self._mi_create_result_table(self._MI_TABLE_CLASS_TOTAL,
                                                   begin_ns, end_ns)

        for proc_stats in sorted(period_data.tids.values(),
                                 key=operator.attrgetter('total_syscalls'),
                                 reverse=True):
            if proc_stats.total_syscalls == 0:
                continue

            pid = proc_stats.pid
            if proc_stats.pid is None:
                pid = '?'

            subtitle = '%s (%s, TID: %d)' % (proc_stats.comm, pid,
                                             proc_stats.tid)
            result_table = \
                self._mi_create_result_table(
                    self._MI_TABLE_CLASS_PER_TID_STATS,
                    begin_ns, end_ns, subtitle)

            for syscall in sorted(proc_stats.syscalls.values(),
                                  key=operator.attrgetter('count'),
                                  reverse=True):
                durations = []
                return_count = {}

                for syscall_event in syscall.syscalls_list:
                    durations.append(syscall_event.duration)

                    if syscall_event.ret >= 0:
                        return_key = 'success'
                    else:
                        try:
                            return_key = errno.errorcode[-syscall_event.ret]
                        except KeyError:
                            return_key = str(syscall_event.ret)

                    # Initialize new keys to 0 so that the increment
                    # below counts each occurrence exactly once.
                    if return_key not in return_count:
                        return_count[return_key] = 0

                    return_count[return_key] += 1

                # statistics.stdev() requires at least two samples.
                if len(durations) >= 2:
                    stdev = mi.Duration(statistics.stdev(durations))
                else:
                    stdev = mi.Unknown()

                result_table.append_row(
                    syscall=mi.Syscall(syscall.name),
                    count=mi.Number(syscall.count),
                    min_duration=mi.Duration(syscall.min_duration),
                    avg_duration=mi.Duration(syscall.total_duration /
                                             syscall.count),
                    max_duration=mi.Duration(syscall.max_duration),
                    stdev_duration=stdev,
                    return_values=mi.String(str(return_count)),
                )

            per_tid_tables.append(result_table)
            total_table.append_row(
                process=mi.Process(proc_stats.comm, pid=proc_stats.pid,
                                   tid=proc_stats.tid),
                count=mi.Number(proc_stats.total_syscalls),
            )

        return total_table, per_tid_tables

    def _print_results(self, total_table, per_tid_tables):
        line_format = '{:<38} {:>14} {:>14} {:>14} {:>12} {:>10} {:<14}'

        print('Per-TID syscalls statistics (usec)')
        total_calls = 0

        for total_row, table in zip(total_table.rows, per_tid_tables):
            print(line_format.format(table.subtitle,
                                     'Count', 'Min', 'Average', 'Max',
                                     'Stdev', 'Return values'))
            for row in table.rows:
                syscall_name = row.syscall.name
                syscall_count = row.count.value
                min_duration = round(row.min_duration.to_us(), 3)
                avg_duration = round(row.avg_duration.to_us(), 3)
                max_duration = round(row.max_duration.to_us(), 3)

                if type(row.stdev_duration) is mi.Unknown:
                    stdev = '?'
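                # A '?' in the Stdev column means the standard deviation
                # is unknown because fewer than two calls were recorded
                # for that system call.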
else: stdev = round(row.stdev_duration.to_us(), 3) proc_total_calls = total_row.count.value print(line_format.format( ' - ' + syscall_name, syscall_count, min_duration, avg_duration, max_duration, stdev, row.return_values.value)) print(line_format.format('Total:', proc_total_calls, '', '', '', '', '')) print('-' * 113) total_calls += proc_total_calls print('\nTotal syscalls: %d' % (total_calls)) def _add_arguments(self, ap): Command._add_proc_filter_args(ap) def _run(mi_mode): syscallscmd = SyscallsAnalysis(mi_mode=mi_mode) syscallscmd.run() # entry point (human) def run(): _run(mi_mode=False) # entry point (MI) def run_mi(): _run(mi_mode=True) lttnganalyses-0.6.1/lttnganalyses/cli/io.py0000664000175000017500000013426112746220524022511 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # 2015 - Philippe Proulx # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import collections import operator import statistics import sys from . import mi from . 
import termgraph from ..core import io from ..common import format_utils from .command import Command _UsageTables = collections.namedtuple('_UsageTables', [ 'per_proc_read', 'per_proc_write', 'per_file_read', 'per_file_write', 'per_proc_block_read', 'per_proc_block_write', 'per_disk_sector', 'per_disk_request', 'per_disk_rtps', 'per_netif_recv', 'per_netif_send', ]) class IoAnalysisCommand(Command): _DESC = """The I/O command.""" _ANALYSIS_CLASS = io.IoAnalysis _MI_TITLE = 'I/O analysis' _MI_DESCRIPTION = 'System call/disk latency statistics, system call ' + \ 'latency distribution, system call top latencies, ' + \ 'I/O usage top, and I/O operations log' _MI_TAGS = [ mi.Tags.IO, mi.Tags.SYSCALL, mi.Tags.STATS, mi.Tags.FREQ, mi.Tags.LOG, mi.Tags.TOP, ] _MI_TABLE_CLASS_SYSCALL_LATENCY_STATS = 'syscall-latency-stats' _MI_TABLE_CLASS_PART_LATENCY_STATS = 'disk-latency-stats' _MI_TABLE_CLASS_FREQ = 'freq' _MI_TABLE_CLASS_TOP_SYSCALL = 'top-syscall' _MI_TABLE_CLASS_LOG = 'log' _MI_TABLE_CLASS_PER_PROCESS_TOP = 'per-process-top' _MI_TABLE_CLASS_PER_FILE_TOP = 'per-file-top' _MI_TABLE_CLASS_PER_PROCESS_TOP_BLOCK = 'per-process-top-block' _MI_TABLE_CLASS_PER_DISK_TOP_SECTOR = 'per-disk-top-sector' _MI_TABLE_CLASS_PER_DISK_TOP_REQUEST = 'per-disk-top-request' _MI_TABLE_CLASS_PER_DISK_TOP_RTPS = 'per-disk-top-rps' _MI_TABLE_CLASS_PER_NETIF_TOP = 'per-netif-top' _MI_TABLE_CLASSES = [ ( _MI_TABLE_CLASS_SYSCALL_LATENCY_STATS, 'System call latency statistics', [ ('obj', 'System call category', mi.String), ('count', 'Call count', mi.Number, 'calls'), ('min_latency', 'Minimum call latency', mi.Duration), ('avg_latency', 'Average call latency', mi.Duration), ('max_latency', 'Maximum call latency', mi.Duration), ('stdev_latency', 'System call latency standard deviation', mi.Duration), ] ), ( _MI_TABLE_CLASS_PART_LATENCY_STATS, 'Partition latency statistics', [ ('obj', 'Partition', mi.Disk), ('count', 'Access count', mi.Number, 'accesses'), ('min_latency', 'Minimum access latency', mi.Duration), ('avg_latency', 'Average access latency', mi.Duration), ('max_latency', 'Maximum access latency', mi.Duration), ('stdev_latency', 'System access latency standard deviation', mi.Duration), ] ), ( _MI_TABLE_CLASS_FREQ, 'I/O request latency distribution', [ ('latency_lower', 'Latency (lower bound)', mi.Duration), ('latency_upper', 'Latency (upper bound)', mi.Duration), ('count', 'Request count', mi.Number, 'requests'), ] ), ( _MI_TABLE_CLASS_TOP_SYSCALL, 'Top system call latencies', [ ('time_range', 'Call time range', mi.TimeRange), ('out_of_range', 'System call out of range?', mi.Boolean), ('duration', 'Call duration', mi.Duration), ('syscall', 'System call', mi.Syscall), ('size', 'Read/write size', mi.Size), ('process', 'Process', mi.Process), ('path', 'File path', mi.Path), ('fd', 'File descriptor', mi.Fd), ] ), ( _MI_TABLE_CLASS_LOG, 'I/O operations log', [ ('time_range', 'Call time range', mi.TimeRange), ('out_of_range', 'System call out of range?', mi.Boolean), ('duration', 'Call duration', mi.Duration), ('syscall', 'System call', mi.Syscall), ('size', 'Read/write size', mi.Size), ('process', 'Process', mi.Process), ('path', 'File path', mi.Path), ('fd', 'File descriptor', mi.Fd), ] ), ( _MI_TABLE_CLASS_PER_PROCESS_TOP, 'Per-process top I/O operations', [ ('process', 'Process', mi.Process), ('size', 'Total operations size', mi.Size), ('disk_size', 'Disk operations size', mi.Size), ('net_size', 'Network operations size', mi.Size), ('unknown_size', 'Unknown operations size', mi.Size), ] ), ( 
_MI_TABLE_CLASS_PER_FILE_TOP, 'Per-file top I/O operations', [ ('path', 'File path/info', mi.Path), ('size', 'Operations size', mi.Size), ('fd_owners', 'File descriptor owners', mi.String), ] ), ( _MI_TABLE_CLASS_PER_PROCESS_TOP_BLOCK, 'Per-process top block I/O operations', [ ('process', 'Process', mi.Process), ('size', 'Operations size', mi.Size), ] ), ( _MI_TABLE_CLASS_PER_DISK_TOP_SECTOR, 'Per-disk top sector I/O operations', [ ('disk', 'Disk', mi.Disk), ('count', 'Sector count', mi.Number, 'sectors'), ] ), ( _MI_TABLE_CLASS_PER_DISK_TOP_REQUEST, 'Per-disk top I/O requests', [ ('disk', 'Disk', mi.Disk), ('count', 'Request count', mi.Number, 'I/O requests'), ] ), ( _MI_TABLE_CLASS_PER_DISK_TOP_RTPS, 'Per-disk top I/O request time/sector', [ ('disk', 'Disk', mi.Disk), ('rtps', 'Request time/sector', mi.Duration), ] ), ( _MI_TABLE_CLASS_PER_NETIF_TOP, 'Per-network interface top I/O operations', [ ('netif', 'Network interface', mi.NetIf), ('size', 'Operations size', mi.Size), ] ), ] _LATENCY_STATS_FORMAT = '{:<14} {:>14} {:>14} {:>14} {:>14} {:>14}' _SECTION_SEPARATOR_STRING = '-' * 89 def _analysis_tick(self, period_data, end_ns): if period_data is None: return begin_ns = period_data.period.begin_evt.timestamp syscall_latency_stats_table = None disk_latency_stats_table = None freq_tables = None top_tables = None log_table = None usage_tables = None if self._args.stats: syscall_latency_stats_table, disk_latency_stats_table = \ self._get_latency_stats_result_tables(period_data, begin_ns, end_ns) if self._args.freq: freq_tables = self._get_freq_result_tables(period_data, begin_ns, end_ns) if self._args.usage: usage_tables = self._get_usage_result_tables(period_data, begin_ns, end_ns) if self._args.top: top_tables = self._get_top_result_tables(period_data, begin_ns, end_ns) if self._args.log: log_table = self._get_log_result_table(period_data, begin_ns, end_ns) if self._mi_mode: self._mi_append_result_tables([ log_table, syscall_latency_stats_table, disk_latency_stats_table, ]) self._mi_append_result_tables(top_tables) self._mi_append_result_tables(usage_tables) self._mi_append_result_tables(freq_tables) else: self._print_date(begin_ns, end_ns) if self._args.usage: self._print_usage(usage_tables) if self._args.stats: self._print_latency_stats(syscall_latency_stats_table, disk_latency_stats_table) if self._args.top: self._print_top(top_tables) if self._args.freq: self._print_freq(freq_tables) if self._args.log: self._print_log(log_table) def _create_summary_result_tables(self): # TODO: create a summary table here self._mi_clear_result_tables() # Filter predicates def _filter_size(self, size): if size is None: return True if self._args.maxsize is not None and size > self._args.maxsize: return False if self._args.minsize is not None and size < self._args.minsize: return False return True def _filter_latency(self, duration): if self._args.max is not None and duration > self._args.max: return False if self._args.min is not None and duration < self._args.min: return False return True def _filter_time_range(self, begin, end): # Note: we only want to return False only when a request has # ended and is completely outside the timerange (i.e. begun # after the end of the time range). 
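        # Illustrative example (values not from the original source):
        # with begin_ts=100 and end_ts=200, a request spanning [150, 250]
        # is kept because it began inside the range, while one spanning
        # [210, 250] is filtered out because it began after the end of
        # the range.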
return not ( self._analysis_conf.begin_ts and self._analysis_conf.end_ts and end and begin > self._analysis_conf.end_ts ) def _filter_io_request(self, io_rq): return self._filter_size(io_rq.size) and \ self._filter_latency(io_rq.duration) and \ self._filter_time_range(io_rq.begin_ts, io_rq.end_ts) def _is_io_rq_out_of_range(self, io_rq): return ( self._analysis_conf.begin_ts and io_rq.begin_ts < self._analysis_conf.begin_ts or self._analysis_conf.end_ts and io_rq.end_ts > self._analysis_conf.end_ts ) def _append_per_proc_read_usage_row(self, period_data, proc_stats, result_table): result_table.append_row( process=mi.Process(proc_stats.comm, pid=proc_stats.pid, tid=proc_stats.tid), size=mi.Size(proc_stats.total_read), disk_size=mi.Size(proc_stats.disk_io.read), net_size=mi.Size(proc_stats.net_io.read), unknown_size=mi.Size(proc_stats.unk_io.read), ) return True def _append_per_proc_write_usage_row(self, period_data, proc_stats, result_table): result_table.append_row( process=mi.Process(proc_stats.comm, pid=proc_stats.pid, tid=proc_stats.tid), size=mi.Size(proc_stats.total_write), disk_size=mi.Size(proc_stats.disk_io.write), net_size=mi.Size(proc_stats.net_io.write), unknown_size=mi.Size(proc_stats.unk_io.write), ) return True def _append_per_proc_block_read_usage_row(self, period_data, proc_stats, result_table): if proc_stats.block_io.read == 0: return False if proc_stats.comm: proc_name = proc_stats.comm else: proc_name = None result_table.append_row( process=mi.Process(proc_name, pid=proc_stats.pid, tid=proc_stats.tid), size=mi.Size(proc_stats.block_io.read), ) return True def _append_per_proc_block_write_usage_row(self, period_data, proc_stats, result_table): if proc_stats.block_io.write == 0: return False if proc_stats.comm: proc_name = proc_stats.comm else: proc_name = None result_table.append_row( process=mi.Process(proc_name, pid=proc_stats.pid, tid=proc_stats.tid), size=mi.Size(proc_stats.block_io.write), ) return True def _append_disk_sector_usage_row(self, period_data, disk_stats, result_table): if disk_stats.total_rq_sectors == 0: return None result_table.append_row( disk=mi.Disk(disk_stats.diskname), count=mi.Number(disk_stats.total_rq_sectors), ) return True def _append_disk_request_usage_row(self, period_data, disk_stats, result_table): if disk_stats.rq_count == 0: return False result_table.append_row( disk=mi.Disk(disk_stats.diskname), count=mi.Number(disk_stats.rq_count), ) return True def _append_disk_rtps_usage_row(self, period_data, disk_stats, result_table): if disk_stats.rq_count == 0: return False avg_latency = (disk_stats.total_rq_duration / disk_stats.rq_count) result_table.append_row( disk=mi.Disk(disk_stats.diskname), rtps=mi.Duration(avg_latency), ) return True def _append_netif_recv_usage_row(self, period_data, netif_stats, result_table): result_table.append_row( netif=mi.NetIf(netif_stats.name), size=mi.Size(netif_stats.recv_bytes) ) return True def _append_netif_send_usage_row(self, period_data, netif_stats, result_table): result_table.append_row( netif=mi.NetIf(netif_stats.name), size=mi.Size(netif_stats.sent_bytes) ) return True def _get_file_stats_fd_owners_str(self, period_data, file_stats): fd_by_pid_str = '' for pid, fd in file_stats.fd_by_pid.items(): comm = period_data.tids[pid].comm fd_by_pid_str += 'fd %d in %s (%s) ' % (fd, comm, pid) return fd_by_pid_str def _append_file_read_usage_row(self, period_data, file_stats, result_table): if file_stats.io.read == 0: return False fd_owners = self._get_file_stats_fd_owners_str(period_data, file_stats) 
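        # fd_owners is a human-readable list such as 'fd 3 in bash (1234) '
        # (example values), naming each process that held the file open.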
result_table.append_row( path=mi.Path(file_stats.filename), size=mi.Size(file_stats.io.read), fd_owners=mi.String(fd_owners), ) return True def _append_file_write_usage_row(self, period_data, file_stats, result_table): if file_stats.io.write == 0: return False fd_owners = self._get_file_stats_fd_owners_str(period_data, file_stats) result_table.append_row( path=mi.Path(file_stats.filename), size=mi.Size(file_stats.io.write), fd_owners=mi.String(fd_owners), ) return True def _fill_usage_result_table(self, period_data, input_list, append_row_cb, result_table): count = 0 limit = self._args.limit for elem in input_list: if append_row_cb(period_data, elem, result_table): count += 1 if limit is not None and count >= limit: break def _fill_per_process_read_usage_result_table(self, period_data, result_table): input_list = sorted(period_data.tids.values(), key=operator.attrgetter('total_read'), reverse=True) self._fill_usage_result_table(period_data, input_list, self._append_per_proc_read_usage_row, result_table) def _fill_per_process_write_usage_result_table(self, period_data, result_table): input_list = sorted(period_data.tids.values(), key=operator.attrgetter('total_write'), reverse=True) self._fill_usage_result_table(period_data, input_list, self._append_per_proc_write_usage_row, result_table) def _fill_per_process_block_read_usage_result_table(self, period_data, result_table): input_list = sorted(period_data.tids.values(), key=operator.attrgetter('block_io.read'), reverse=True) self._fill_usage_result_table( period_data, input_list, self._append_per_proc_block_read_usage_row, result_table) def _fill_per_process_block_write_usage_result_table(self, period_data, result_table): input_list = sorted(period_data.tids.values(), key=operator.attrgetter('block_io.write'), reverse=True) self._fill_usage_result_table( period_data, input_list, self._append_per_proc_block_write_usage_row, result_table) def _fill_disk_sector_usage_result_table(self, period_data, result_table): input_list = sorted(period_data.disks.values(), key=operator.attrgetter('total_rq_sectors'), reverse=True) self._fill_usage_result_table(period_data, input_list, self._append_disk_sector_usage_row, result_table) def _fill_disk_request_usage_result_table(self, period_data, result_table): input_list = sorted(period_data.disks.values(), key=operator.attrgetter('rq_count'), reverse=True) self._fill_usage_result_table(period_data, input_list, self._append_disk_request_usage_row, result_table) def _fill_disk_rtps_usage_result_table(self, period_data, result_table): input_list = period_data.disks.values() self._fill_usage_result_table(period_data, input_list, self._append_disk_rtps_usage_row, result_table) def _fill_netif_recv_usage_result_table(self, period_data, result_table): input_list = sorted(period_data.ifaces.values(), key=operator.attrgetter('recv_bytes'), reverse=True) self._fill_usage_result_table(period_data, input_list, self._append_netif_recv_usage_row, result_table) def _fill_netif_send_usage_result_table(self, period_data, result_table): input_list = sorted(period_data.ifaces.values(), key=operator.attrgetter('sent_bytes'), reverse=True) self._fill_usage_result_table(period_data, input_list, self._append_netif_send_usage_row, result_table) def _fill_file_read_usage_result_table(self, period_data, files, result_table): input_list = sorted(files.values(), key=lambda file_stats: file_stats.io.read, reverse=True) self._fill_usage_result_table(period_data, input_list, self._append_file_read_usage_row, result_table) def 
_fill_file_write_usage_result_table(self, period_data, files, result_table): input_list = sorted(files.values(), key=lambda file_stats: file_stats.io.write, reverse=True) self._fill_usage_result_table(period_data, input_list, self._append_file_write_usage_row, result_table) def _fill_file_usage_result_tables(self, period_data, read_table, write_table): files = self._analysis.get_files_stats(period_data) self._fill_file_read_usage_result_table(period_data, files, read_table) self._fill_file_write_usage_result_table(period_data, files, write_table) def _get_usage_result_tables(self, period_data, begin, end): # create result tables per_proc_read_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_PROCESS_TOP, begin, end, 'read') per_proc_write_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_PROCESS_TOP, begin, end, 'written') per_file_read_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_FILE_TOP, begin, end, 'read') per_file_write_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_FILE_TOP, begin, end, 'written') per_proc_block_read_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_PROCESS_TOP_BLOCK, begin, end, 'read') per_proc_block_write_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_PROCESS_TOP_BLOCK, begin, end, 'written') per_disk_sector_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_DISK_TOP_SECTOR, begin, end) per_disk_request_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_DISK_TOP_REQUEST, begin, end) per_disk_rtps_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_DISK_TOP_RTPS, begin, end) per_netif_recv_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_NETIF_TOP, begin, end, 'received') per_netif_send_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PER_NETIF_TOP, begin, end, 'sent') # fill result tables self._fill_per_process_read_usage_result_table(period_data, per_proc_read_table) self._fill_per_process_write_usage_result_table(period_data, per_proc_write_table) self._fill_file_usage_result_tables(period_data, per_file_read_table, per_file_write_table) self._fill_per_process_block_read_usage_result_table( period_data, per_proc_block_read_table) self._fill_per_process_block_write_usage_result_table( period_data, per_proc_block_write_table) self._fill_disk_sector_usage_result_table(period_data, per_disk_sector_table) self._fill_disk_request_usage_result_table(period_data, per_disk_request_table) self._fill_disk_rtps_usage_result_table(period_data, per_disk_rtps_table) self._fill_netif_recv_usage_result_table(period_data, per_netif_recv_table) self._fill_netif_send_usage_result_table(period_data, per_netif_send_table) return _UsageTables( per_proc_read=per_proc_read_table, per_proc_write=per_proc_write_table, per_file_read=per_file_read_table, per_file_write=per_file_write_table, per_proc_block_read=per_proc_block_read_table, per_proc_block_write=per_proc_block_write_table, per_disk_sector=per_disk_sector_table, per_disk_request=per_disk_request_table, per_disk_rtps=per_disk_rtps_table, per_netif_recv=per_netif_recv_table, per_netif_send=per_netif_send_table, ) def _print_per_proc_io(self, result_table, title): header_format = '{:<25} {:<10} {:<10} {:<10}' label_header = header_format.format( 'Process', 'Disk', 'Net', 'Unknown' ) def get_label(row): label_format = '{:<25} {:>10} {:>10} {:>10}' if row.process.pid is None: pid_str = 'unknown (tid=%d)' % (row.process.tid) else: pid_str = str(row.process.pid) label = label_format.format( 
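                # One column per I/O category: 'name (pid)' first, then
                # the disk, network and unknown byte counts, each
                # pretty-printed by format_utils.format_size().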
'%s (%s)' % (row.process.name, pid_str), format_utils.format_size(row.disk_size.value), format_utils.format_size(row.net_size.value), format_utils.format_size(row.unknown_size.value) ) return label graph = termgraph.BarGraph( title='Per-process I/O ' + title, label_header=label_header, get_value=lambda row: row.size.value, get_value_str=format_utils.format_size, get_label=get_label, data=result_table.rows ) graph.print_graph() def _print_per_proc_block_io(self, result_table, title): def get_label(row): proc_name = row.process.name if not proc_name: proc_name = 'unknown' if row.process.pid is None: pid_str = 'unknown (tid={})'.format(row.process.tid) else: pid_str = str(row.process.pid) return '{} (pid={})'.format(proc_name, pid_str) graph = termgraph.BarGraph( title='Block I/O ' + title, label_header='Process', get_value=lambda row: row.size.value, get_value_str=format_utils.format_size, get_label=get_label, data=result_table.rows ) graph.print_graph() def _print_per_disk_sector(self, result_table): graph = termgraph.BarGraph( title='Disk Requests Sector Count', label_header='Disk', unit='sectors', get_value=lambda row: row.count.value, get_label=lambda row: row.disk.name, data=result_table.rows ) graph.print_graph() def _print_per_disk_request(self, result_table): graph = termgraph.BarGraph( title='Disk Request Count', label_header='Disk', unit='requests', get_value=lambda row: row.count.value, get_label=lambda row: row.disk.name, data=result_table.rows ) graph.print_graph() def _print_per_disk_rtps(self, result_table): graph = termgraph.BarGraph( title='Disk Request Average Latency', label_header='Disk', unit='ms', get_value=lambda row: row.rtps.value / 1000000, get_label=lambda row: row.disk.name, data=result_table.rows ) graph.print_graph() def _print_per_netif_io(self, result_table, title): graph = termgraph.BarGraph( title='Network ' + title + ' Bytes', label_header='Interface', get_value=lambda row: row.size.value, get_value_str=format_utils.format_size, get_label=lambda row: row.netif.name, data=result_table.rows ) graph.print_graph() def _print_per_file_io(self, result_table, title): # FIXME add option to show FD owners # FIXME why are read and write values the same? 
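        # The per-file graph ranks paths by total bytes for this table
        # (read or written, per the table's subtitle); labels are the
        # file paths resolved from the FD metadata.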
graph = termgraph.BarGraph( title='Per-file I/O ' + title, label_header='Path', get_value=lambda row: row.size.value, get_value_str=format_utils.format_size, get_label=lambda row: row.path.path, data=result_table.rows ) graph.print_graph() def _print_usage(self, usage_tables): self._print_per_proc_io(usage_tables.per_proc_read, 'Read') self._print_per_proc_io(usage_tables.per_proc_write, 'Write') self._print_per_file_io(usage_tables.per_file_read, 'Read') self._print_per_file_io(usage_tables.per_file_write, 'Write') self._print_per_proc_block_io(usage_tables.per_proc_block_read, 'Read') self._print_per_proc_block_io( usage_tables.per_proc_block_write, 'Write' ) self._print_per_disk_sector(usage_tables.per_disk_sector) self._print_per_disk_request(usage_tables.per_disk_request) self._print_per_disk_rtps(usage_tables.per_disk_rtps) self._print_per_netif_io(usage_tables.per_netif_recv, 'Received') self._print_per_netif_io(usage_tables.per_netif_send, 'Sent') def _fill_freq_result_table(self, duration_list, result_table): if not duration_list: return # The number of bins for the histogram resolution = self._args.freq_resolution min_duration = min(duration_list) max_duration = max(duration_list) # ns to µs min_duration /= 1000 max_duration /= 1000 step = (max_duration - min_duration) / resolution if step == 0: return buckets = [] values = [] for i in range(resolution): buckets.append(i * step) values.append(0) for duration in duration_list: duration /= 1000 index = min(int((duration - min_duration) / step), resolution - 1) values[index] += 1 for index, value in enumerate(values): result_table.append_row( latency_lower=mi.Duration.from_us(index * step + min_duration), latency_upper=mi.Duration.from_us((index + 1) * step + min_duration), count=mi.Number(value), ) def _get_disk_freq_result_tables(self, period_data, begin, end): result_tables = [] for disk in period_data.disks.values(): rq_durations = [rq.duration for rq in disk.rq_list if self._filter_io_request(rq)] subtitle = 'disk: {}'.format(disk.diskname) result_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_FREQ, begin, end, subtitle) self._fill_freq_result_table(rq_durations, result_table) result_tables.append(result_table) return result_tables def _get_syscall_freq_result_tables(self, period_data, begin, end): open_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_FREQ, begin, end, 'open') read_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_FREQ, begin, end, 'read') write_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_FREQ, begin, end, 'write') sync_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_FREQ, begin, end, 'sync') self._fill_freq_result_table( [io_rq.duration for io_rq in self._analysis.open_io_requests(period_data) if self._filter_io_request(io_rq)], open_table) self._fill_freq_result_table( [io_rq.duration for io_rq in self._analysis.read_io_requests(period_data) if self._filter_io_request(io_rq)], read_table) self._fill_freq_result_table( [io_rq.duration for io_rq in self._analysis.write_io_requests(period_data) if self._filter_io_request(io_rq)], write_table) self._fill_freq_result_table( [io_rq.duration for io_rq in self._analysis.sync_io_requests(period_data) if self._filter_io_request(io_rq)], sync_table) return [open_table, read_table, write_table, sync_table] def _get_freq_result_tables(self, period_data, begin, end): syscall_tables = self._get_syscall_freq_result_tables(period_data, begin, end) disk_tables = self._get_disk_freq_result_tables(period_data, begin, 
end) return syscall_tables + disk_tables def _print_one_freq(self, result_table): graph = termgraph.FreqGraph( data=result_table.rows, get_value=lambda row: row.count.value, get_lower_bound=lambda row: row.latency_lower.to_us(), title='{} {}'.format(result_table.title, result_table.subtitle), unit='µs' ) graph.print_graph() def _print_freq(self, freq_tables): for freq_table in freq_tables: self._print_one_freq(freq_table) def _append_log_row(self, period_data, io_rq, result_table): if io_rq.size is None: size = mi.Empty() else: size = mi.Size(io_rq.size) tid = io_rq.tid proc_stats = period_data.tids[tid] proc_name = proc_stats.comm # TODO: handle fd_in/fd_out for RW type operations if io_rq.fd is None: path = mi.Empty() fd = mi.Empty() else: fd = mi.Fd(io_rq.fd) parent_proc = proc_stats if parent_proc.pid is not None: parent_proc = period_data.tids[parent_proc.pid] fd_stats = parent_proc.get_fd(io_rq.fd, io_rq.end_ts) if fd_stats is not None: path = mi.Path(fd_stats.filename) else: path = mi.Unknown() result_table.append_row( time_range=mi.TimeRange(io_rq.begin_ts, io_rq.end_ts), out_of_range=mi.Boolean(self._is_io_rq_out_of_range(io_rq)), duration=mi.Duration(io_rq.duration), syscall=mi.Syscall(io_rq.syscall_name), size=size, process=mi.Process(proc_name, tid=tid), path=path, fd=fd, ) def _fill_log_result_table(self, period_data, rq_list, sort_key, is_top, result_table): if not rq_list: return count = 0 for io_rq in sorted(rq_list, key=operator.attrgetter(sort_key), reverse=is_top): if is_top and count > self._args.limit: break self._append_log_row(period_data, io_rq, result_table) count += 1 def _fill_log_result_table_from_io_requests(self, period_data, io_requests, sort_key, is_top, result_table): io_requests = [io_rq for io_rq in io_requests if self._filter_io_request(io_rq)] self._fill_log_result_table(period_data, io_requests, sort_key, is_top, result_table) def _get_top_result_tables(self, period_data, begin, end): open_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_TOP_SYSCALL, begin, end, 'open') read_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_TOP_SYSCALL, begin, end, 'read') write_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_TOP_SYSCALL, begin, end, 'write') sync_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_TOP_SYSCALL, begin, end, 'sync') self._fill_log_result_table_from_io_requests( period_data, self._analysis.open_io_requests(period_data), 'duration', True, open_table) self._fill_log_result_table_from_io_requests( period_data, self._analysis.read_io_requests(period_data), 'duration', True, read_table) self._fill_log_result_table_from_io_requests( period_data, self._analysis.write_io_requests(period_data), 'duration', True, write_table) self._fill_log_result_table_from_io_requests( period_data, self._analysis.sync_io_requests(period_data), 'duration', True, sync_table) return [open_table, read_table, write_table, sync_table] def _print_log_row(self, row): fmt = '{:<40} {:<16} {:>16} {:>11} {:<24} {:<8} {:<14}' time_range_str = format_utils.format_time_range( row.time_range.begin.value, row.time_range.end.value, self._args.multi_day, self._args.gmt ) duration_str = '%0.03f' % row.duration.to_us() if type(row.size) is mi.Empty: size = 'N/A' else: size = format_utils.format_size(row.size.value) tid = row.process.tid proc_name = row.process.name if type(row.fd) is mi.Empty: file_str = 'N/A' else: if type(row.path) is mi.Unknown: path = 'unknown' else: path = row.path.path file_str = '%s (fd=%s)' % (path, row.fd.fd) if 
row.out_of_range.value: time_range_str += '*' duration_str += '*' else: time_range_str += ' ' duration_str += ' ' print(fmt.format(time_range_str, row.syscall.name, duration_str, size, proc_name, tid, file_str)) def _print_log(self, result_table): if not result_table.rows: return has_out_of_range_rq = False print() fmt = '{} {} (usec)' print(fmt.format(result_table.title, result_table.subtitle)) header_fmt = '{:<20} {:<20} {:<16} {:<23} {:<5} {:<24} {:<8} {:<14}' print(header_fmt.format( 'Begin', 'End', 'Name', 'Duration (usec)', 'Size', 'Proc', 'PID', 'Filename')) for row in result_table.rows: self._print_log_row(row) if not has_out_of_range_rq and row.out_of_range.value: has_out_of_range_rq = True if has_out_of_range_rq: print('*: Syscalls started and/or completed outside of the ' 'range specified') def _print_top(self, top_tables): for table in top_tables: self._print_log(table) def _get_log_result_table(self, period_data, begin, end): log_table = self._mi_create_result_table(self._MI_TABLE_CLASS_LOG, begin, end) self._fill_log_result_table_from_io_requests( period_data, self._analysis.io_requests(period_data), 'begin_ts', False, log_table) return log_table def _append_latency_stats_row(self, obj, rq_durations, result_table): rq_count = len(rq_durations) total_duration = sum(rq_durations) if len(rq_durations) > 0: min_duration = min(rq_durations) max_duration = max(rq_durations) else: min_duration = 0 max_duration = 0 if rq_count < 2: stdev = mi.Unknown() else: stdev = mi.Duration(statistics.stdev(rq_durations)) if rq_count > 0: avg = total_duration / rq_count else: avg = 0 result_table.append_row( obj=obj, count=mi.Number(rq_count), min_latency=mi.Duration(min_duration), avg_latency=mi.Duration(avg), max_latency=mi.Duration(max_duration), stdev_latency=stdev, ) def _append_latency_stats_row_from_requests(self, obj, io_requests, result_table): rq_durations = [io_rq.duration for io_rq in io_requests if self._filter_io_request(io_rq)] self._append_latency_stats_row(obj, rq_durations, result_table) def _get_syscall_latency_stats_result_table(self, period_data, begin, end): result_table = self._mi_create_result_table( self._MI_TABLE_CLASS_SYSCALL_LATENCY_STATS, begin, end) append_fn = self._append_latency_stats_row_from_requests append_fn(mi.String('Open'), self._analysis.open_io_requests(period_data), result_table) append_fn(mi.String('Read'), self._analysis.read_io_requests(period_data), result_table) append_fn(mi.String('Write'), self._analysis.write_io_requests(period_data), result_table) append_fn(mi.String('Sync'), self._analysis.sync_io_requests(period_data), result_table) return result_table def _get_disk_latency_stats_result_table(self, period_data, begin, end): if not period_data.disks: return result_table = self._mi_create_result_table( self._MI_TABLE_CLASS_PART_LATENCY_STATS, begin, end) for disk in period_data.disks.values(): if disk.rq_count: rq_durations = [rq.duration for rq in disk.rq_list if self._filter_io_request(rq)] disk = mi.Disk(disk.diskname) self._append_latency_stats_row(disk, rq_durations, result_table) return result_table def _get_latency_stats_result_tables(self, period_data, begin, end): syscall_tbl = self._get_syscall_latency_stats_result_table(period_data, begin, end) disk_tbl = self._get_disk_latency_stats_result_table(period_data, begin, end) return syscall_tbl, disk_tbl def _print_latency_stats_row(self, row): if type(row.stdev_latency) is mi.Unknown: stdev = '?' 
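        # stdev_latency is mi.Unknown when fewer than two request durations
        # were recorded (see _append_latency_stats_row() above), so a '?'
        # placeholder is printed instead of a number.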
else: stdev = '%0.03f' % row.stdev_latency.to_us() avg = '%0.03f' % row.avg_latency.to_us() min_duration = '%0.03f' % row.min_latency.to_us() max_duration = '%0.03f' % row.max_latency.to_us() print(IoAnalysisCommand._LATENCY_STATS_FORMAT.format( str(row.obj), row.count.value, min_duration, avg, max_duration, stdev)) def _print_syscall_latency_stats(self, stats_table): print('\nSyscalls latency statistics (usec):') print(IoAnalysisCommand._LATENCY_STATS_FORMAT.format( 'Type', 'Count', 'Min', 'Average', 'Max', 'Stdev')) print(IoAnalysisCommand._SECTION_SEPARATOR_STRING) for row in stats_table.rows: self._print_latency_stats_row(row) def _print_disk_latency_stats(self, stats_table): if not stats_table or not stats_table.rows: return print('\nDisk latency statistics (usec):') print(IoAnalysisCommand._LATENCY_STATS_FORMAT.format( 'Name', 'Count', 'Min', 'Average', 'Max', 'Stdev')) print(IoAnalysisCommand._SECTION_SEPARATOR_STRING) for row in stats_table.rows: self._print_latency_stats_row(row) def _print_latency_stats(self, syscall_latency_stats_table, disk_latency_stats_table): self._print_syscall_latency_stats(syscall_latency_stats_table) self._print_disk_latency_stats(disk_latency_stats_table) def _add_arguments(self, ap): Command._add_min_max_args(ap) Command._add_log_args( ap, help='Output the I/O requests in chronological order') Command._add_top_args( ap, help='Output the top I/O latencies by category') Command._add_stats_args(ap, help='Output the I/O latency statistics') Command._add_freq_args( ap, help='Output the I/O latency frequency distribution') ap.add_argument('--usage', action='store_true', help='Output the I/O usage') ap.add_argument('--minsize', type=float, help='Filter out I/O operations working with ' 'less than minsize bytes') ap.add_argument('--maxsize', type=float, help='Filter out I/O operations working with ' 'more than maxsize bytes') def _run(mi_mode): iocmd = IoAnalysisCommand(mi_mode=mi_mode) iocmd.run() def _runstats(mi_mode): sys.argv.insert(1, '--stats') _run(mi_mode) def _runlog(mi_mode): sys.argv.insert(1, '--log') _run(mi_mode) def _runfreq(mi_mode): sys.argv.insert(1, '--freq') _run(mi_mode) def _runlatencytop(mi_mode): sys.argv.insert(1, '--top') _run(mi_mode) def _runusage(mi_mode): sys.argv.insert(1, '--usage') _run(mi_mode) def runstats(): _runstats(mi_mode=False) def runlog(): _runlog(mi_mode=False) def runfreq(): _runfreq(mi_mode=False) def runlatencytop(): _runlatencytop(mi_mode=False) def runusage(): _runusage(mi_mode=False) def runstats_mi(): _runstats(mi_mode=True) def runlog_mi(): _runlog(mi_mode=True) def runfreq_mi(): _runfreq(mi_mode=True) def runlatencytop_mi(): _runlatencytop(mi_mode=True) def runusage_mi(): _runusage(mi_mode=True) lttnganalyses-0.6.1/lttnganalyses/cli/memtop.py0000664000175000017500000001735212745737273023411 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # 2015 - Philippe Proulx # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the 
Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import operator from .command import Command from ..core import memtop from . import mi from . import termgraph class Memtop(Command): _DESC = """The memtop command.""" _ANALYSIS_CLASS = memtop.Memtop _MI_TITLE = 'Top memory usage' _MI_DESCRIPTION = 'Per-TID top allocated/freed memory' _MI_TAGS = [mi.Tags.MEMORY, mi.Tags.TOP] _MI_TABLE_CLASS_ALLOCD = 'allocd' _MI_TABLE_CLASS_FREED = 'freed' _MI_TABLE_CLASS_TOTAL = 'total' _MI_TABLE_CLASS_SUMMARY = 'summary' _MI_TABLE_CLASSES = [ ( _MI_TABLE_CLASS_ALLOCD, 'Per-TID top allocated memory', [ ('process', 'Process', mi.Process), ('pages', 'Allocated pages', mi.Number, 'pages'), ] ), ( _MI_TABLE_CLASS_FREED, 'Per-TID top freed memory', [ ('process', 'Process', mi.Process), ('pages', 'Freed pages', mi.Number, 'pages'), ] ), ( _MI_TABLE_CLASS_TOTAL, 'Total allocated/freed memory', [ ('allocd', 'Total allocated pages', mi.Number, 'pages'), ('freed', 'Total freed pages', mi.Number, 'pages'), ] ), ( _MI_TABLE_CLASS_SUMMARY, 'Memory usage - summary', [ ('time_range', 'Time range', mi.TimeRange), ('allocd', 'Total allocated pages', mi.Number, 'pages'), ('freed', 'Total freed pages', mi.Number, 'pages'), ] ), ] def _analysis_tick(self, period_data, end_ns): if period_data is None: return begin_ns = period_data.period.begin_evt.timestamp allocd_table = self._get_per_tid_allocd_result_table(period_data, begin_ns, end_ns) freed_table = self._get_per_tid_freed_result_table(period_data, begin_ns, end_ns) total_table = self._get_total_result_table(period_data, begin_ns, end_ns) if self._mi_mode: self._mi_append_result_table(allocd_table) self._mi_append_result_table(freed_table) self._mi_append_result_table(total_table) else: self._print_date(begin_ns, end_ns) self._print_per_tid_allocd(allocd_table) self._print_per_tid_freed(freed_table) self._print_total(total_table) def _create_summary_result_tables(self): total_tables = self._mi_get_result_tables(self._MI_TABLE_CLASS_TOTAL) begin = total_tables[0].timerange.begin.value end = total_tables[-1].timerange.end.value summary_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_SUMMARY, begin, end) for total_table in total_tables: total_allocd = total_table.rows[0].allocd total_freed = total_table.rows[0].freed summary_table.append_row( time_range=total_table.timerange, allocd=total_allocd, freed=total_freed, ) self._mi_clear_result_tables() self._mi_append_result_table(summary_table) def _get_per_tid_attr_result_table(self, period_data, table_class, attr, begin_ns, end_ns): result_table = self._mi_create_result_table(table_class, begin_ns, end_ns) count = 0 for tid in sorted(period_data.tids.values(), key=operator.attrgetter(attr), reverse=True): result_table.append_row( process=mi.Process(tid.comm, tid=tid.tid), pages=mi.Number(getattr(tid, attr)), ) count += 1 if self._args.limit > 0 and count >= self._args.limit: break return result_table def _get_per_tid_allocd_result_table(self, period_data, begin_ns, end_ns): return self._get_per_tid_attr_result_table(period_data, self._MI_TABLE_CLASS_ALLOCD, 'allocated_pages', begin_ns, 
end_ns) def _get_per_tid_freed_result_table(self, period_data, begin_ns, end_ns): return self._get_per_tid_attr_result_table(period_data, self._MI_TABLE_CLASS_FREED, 'freed_pages', begin_ns, end_ns) def _get_total_result_table(self, period_data, begin_ns, end_ns): result_table = self._mi_create_result_table(self._MI_TABLE_CLASS_TOTAL, begin_ns, end_ns) alloc = 0 freed = 0 for tid in period_data.tids.values(): alloc += tid.allocated_pages freed += tid.freed_pages result_table.append_row( allocd=mi.Number(alloc), freed=mi.Number(freed), ) return result_table def _print_per_tid_result(self, result_table, title): graph = termgraph.BarGraph( title=title, unit='pages', get_value=lambda row: row.pages.value, get_label=lambda row: '%s (%d)' % (row.process.name, row.process.tid), label_header='Process', data=result_table.rows ) graph.print_graph() def _print_per_tid_allocd(self, result_table): self._print_per_tid_result(result_table, 'Per-TID Memory Allocations') def _print_per_tid_freed(self, result_table): self._print_per_tid_result(result_table, 'Per-TID Memory Deallocations') def _print_total(self, result_table): alloc = result_table.rows[0].allocd.value freed = result_table.rows[0].freed.value print('\nTotal memory usage:\n- %d pages allocated\n- %d pages freed' % (alloc, freed)) def _add_arguments(self, ap): Command._add_proc_filter_args(ap) Command._add_top_args(ap) def _run(mi_mode): memtopcmd = Memtop(mi_mode=mi_mode) memtopcmd.run() # entry point (human) def run(): _run(mi_mode=False) # entry point (MI) def run_mi(): _run(mi_mode=True) lttnganalyses-0.6.1/lttnganalyses/cli/cputop.py0000664000175000017500000002002412745737273023420 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # 2015 - Philippe Proulx # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import operator from ..common import format_utils from .command import Command from ..core import cputop from . import mi from . 
import termgraph class Cputop(Command): _DESC = """The cputop command.""" _ANALYSIS_CLASS = cputop.Cputop _MI_TITLE = 'Top CPU usage' _MI_DESCRIPTION = 'Per-TID, per-CPU, and total top CPU usage' _MI_TAGS = [mi.Tags.CPU, mi.Tags.TOP] _MI_TABLE_CLASS_PER_PROC = 'per-process' _MI_TABLE_CLASS_PER_CPU = 'per-cpu' _MI_TABLE_CLASS_TOTAL = 'total' _MI_TABLE_CLASS_SUMMARY = 'summary' _MI_TABLE_CLASSES = [ ( _MI_TABLE_CLASS_PER_PROC, 'Per-TID top CPU usage', [ ('process', 'Process', mi.Process), ('migrations', 'Migration count', mi.Number, 'migrations'), ('prio_list', 'Chronological priorities', mi.String), ('usage', 'CPU usage', mi.Ratio), ] ), ( _MI_TABLE_CLASS_PER_CPU, 'Per-CPU top CPU usage', [ ('cpu', 'CPU', mi.Cpu), ('usage', 'CPU usage', mi.Ratio), ]), ( _MI_TABLE_CLASS_TOTAL, 'Total CPU usage', [ ('usage', 'CPU usage', mi.Ratio), ] ), ( _MI_TABLE_CLASS_SUMMARY, 'CPU usage - summary', [ ('time_range', 'Time range', mi.TimeRange), ('usage', 'Total CPU usage', mi.Ratio), ] ), ] def _analysis_tick(self, period_data, end_ns): if period_data is None: return begin_ns = period_data.period.begin_evt.timestamp per_tid_table = self._get_per_tid_usage_result_table(period_data, begin_ns, end_ns) per_cpu_table = self._get_per_cpu_usage_result_table(period_data, begin_ns, end_ns) total_table = self._get_total_usage_result_table(period_data, begin_ns, end_ns) if self._mi_mode: self._mi_append_result_table(per_tid_table) self._mi_append_result_table(per_cpu_table) self._mi_append_result_table(total_table) else: self._print_date(begin_ns, end_ns) self._print_per_tid_usage(per_tid_table) self._print_per_cpu_usage(per_cpu_table) if total_table: self._print_total_cpu_usage(total_table) def _create_summary_result_tables(self): total_tables = self._mi_get_result_tables(self._MI_TABLE_CLASS_TOTAL) begin = total_tables[0].timerange.begin.value end = total_tables[-1].timerange.end.value summary_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_SUMMARY, begin, end) for total_table in total_tables: usage = total_table.rows[0].usage summary_table.append_row( time_range=total_table.timerange, usage=usage, ) self._mi_clear_result_tables() self._mi_append_result_table(summary_table) def _get_per_tid_usage_result_table(self, period_data, begin_ns, end_ns): result_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_PER_PROC, begin_ns, end_ns) count = 0 for tid in sorted(period_data.tids.values(), key=operator.attrgetter('usage_percent'), reverse=True): prio_list = format_utils.format_prio_list(tid.prio_list) result_table.append_row( process=mi.Process(tid.comm, tid=tid.tid), migrations=mi.Number(tid.migrate_count), prio_list=mi.String(prio_list), usage=mi.Ratio.from_percentage(tid.usage_percent) ) count += 1 if self._args.limit > 0 and count >= self._args.limit: break return result_table def _get_per_cpu_usage_result_table(self, period_data, begin_ns, end_ns): result_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_PER_CPU, begin_ns, end_ns) for cpu in sorted(period_data.cpus.values(), key=operator.attrgetter('cpu_id')): result_table.append_row( cpu=mi.Cpu(cpu.cpu_id), usage=mi.Ratio.from_percentage(cpu.usage_percent) ) return result_table def _get_total_usage_result_table(self, period_data, begin_ns, end_ns): result_table = \ self._mi_create_result_table(self._MI_TABLE_CLASS_TOTAL, begin_ns, end_ns) cpu_count = len(self.state.cpus) usage_percent = 0 if not cpu_count: return for cpu in sorted(period_data.cpus.values(), key=operator.attrgetter('usage_percent'), reverse=True): usage_percent += 
cpu.usage_percent # average per CPU usage_percent /= cpu_count result_table.append_row( usage=mi.Ratio.from_percentage(usage_percent), ) return result_table def _print_per_tid_usage(self, result_table): row_format = ' {:<25} {:>10} {}' label_header = row_format.format('Process', 'Migrations', 'Priorities') def format_label(row): return row_format.format( '%s (%d)' % (row.process.name, row.process.tid), row.migrations.value, row.prio_list.value, ) graph = termgraph.BarGraph( title='Per-TID Usage', unit='%', get_value=lambda row: row.usage.to_percentage(), get_label=format_label, label_header=label_header, data=result_table.rows ) graph.print_graph() def _print_per_cpu_usage(self, result_table): graph = termgraph.BarGraph( title='Per-CPU Usage', unit='%', get_value=lambda row: row.usage.to_percentage(), get_label=lambda row: 'CPU %d' % row.cpu.id, data=result_table.rows ) graph.print_graph() def _print_total_cpu_usage(self, result_table): usage_percent = result_table.rows[0].usage.to_percentage() print('\nTotal CPU Usage: %0.02f%%\n' % usage_percent) def _add_arguments(self, ap): Command._add_proc_filter_args(ap) Command._add_top_args(ap) def _run(mi_mode): cputopcmd = Cputop(mi_mode=mi_mode) cputopcmd.run() def run(): _run(mi_mode=False) def run_mi(): _run(mi_mode=True) lttnganalyses-0.6.1/lttnganalyses/cli/period_parsing.py0000664000175000017500000003064513033475105025105 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Philippe Proulx # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import pyparsing as pp from ..core import period class MalformedExpression(Exception): pass class DuplicatePeriodCapture(Exception): def __init__(self, name): self._name = name def __str__(self): return 'Duplicate period capture name: "{}"'.format(self._name) # common grammar elements _e = pp.CaselessLiteral('e') _number = (pp.Combine(pp.Word('+-' + pp.nums, pp.nums) + pp.Optional('.' 
+ pp.Optional(pp.Word(pp.nums))) + pp.Optional(_e + pp.Word('+-' + pp.nums, pp.nums))) .setResultsName('number')) _quoted_string = pp.QuotedString('"', '\\').setResultsName('quoted-string') _identifier = pp.Word(pp.alphas + '_', pp.alphanums + '_').setResultsName('id') _tph_scope_prefix = (pp.Literal(period.DynScope.TPH.value) .setResultsName('tph-scope-prefix')) _spc_scope_prefix = (pp.Literal(period.DynScope.SPC.value) .setResultsName('spc-scope-prefix')) _seh_scope_prefix = (pp.Literal(period.DynScope.SEH.value) .setResultsName('seh-scope-prefix')) _sec_scope_prefix = (pp.Literal(period.DynScope.SEC.value) .setResultsName('sec-scope-prefix')) _ec_scope_prefix = (pp.Literal(period.DynScope.EC.value) .setResultsName('ec-scope-prefix')) _ep_scope_prefix = (pp.Literal(period.DynScope.EP.value) .setResultsName('ep-scope-prefix')) _dyn_scope_prefix = pp.Group(pp.Group(_tph_scope_prefix | _spc_scope_prefix | _seh_scope_prefix | _sec_scope_prefix | _ec_scope_prefix | _ep_scope_prefix) + '.').setResultsName('dyn-scope-prefix') _parent_scope_prefix = (pp.Group(pp.Literal('$parent') + '.') .setResultsName('parent-scope-prefix')) _begin_scope_prefix = (pp.Group(pp.Literal('$begin') + '.') .setResultsName('begin-scope-prefix')) _event_scope_prefix = (pp.Group(pp.Literal('$evt') + '.') .setResultsName('event-scope-prefix')) _event_field = pp.Group(pp.Optional(_parent_scope_prefix) + pp.Optional(_begin_scope_prefix) + _event_scope_prefix + pp.Optional(_dyn_scope_prefix) + _identifier).setResultsName('event-field') _event_name = pp.Group(pp.Optional(_parent_scope_prefix) + pp.Optional(_begin_scope_prefix) + _event_scope_prefix + '$name').setResultsName('event-name') _relop = (pp.Group(pp.Literal('==') | '!=' | '<=' | '>=' | '<' | '>') .setResultsName('relop')) _eqop = pp.Group(pp.Literal('=*') | '==' | '!=').setResultsName('eqop') _name_comp_expr = pp.Group(_event_name + _eqop + _quoted_string).setResultsName('name-comp-expr') _number_comp_expr = pp.Group(_event_field + _relop + _number).setResultsName('number-comp-expr') _string_comp_expr = pp.Group(_event_field + _eqop + _quoted_string).setResultsName('string-comp-expr') _field_comp_expr = (pp.Group(_event_field.setResultsName('lh') + _relop + _event_field.setResultsName('rh')) .setResultsName('field-comp-expr')) _comp_expr = (_name_comp_expr | _number_comp_expr | _string_comp_expr | _field_comp_expr) _not_op = pp.Literal('!').setResultsName('notop') _and_op = pp.Literal('&&').setResultsName('andop') _or_op = pp.Literal('||').setResultsName('orop') _expr = pp.infixNotation(_comp_expr, [ (_not_op, 1, pp.opAssoc.RIGHT), (_and_op, 2, pp.opAssoc.LEFT), (_or_op, 2, pp.opAssoc.LEFT) ]).setResultsName('expr') # period definition grammar elements _parent_name = pp.Literal('(') + _identifier + ')' _period_info = (pp.Group(_identifier.setResultsName('name') + (pp.Optional(_parent_name) .setResultsName('parent-name'))) .setResultsName('period-info')) _period_def = (pp.Optional(_period_info) + ':' + _expr.setResultsName('begin-expr') + pp.Optional(pp.Literal(':') + _expr.setResultsName('end-expr'))) # period capture grammar elements _capture_ref = (pp.Group(pp.Optional(_identifier + '=').setResultsName('var') + (_event_name | _event_field)) .setResultsName('capture-ref')) _capture_refs = pp.delimitedList(_capture_ref, ',') _captures_def = (_identifier.setResultsName('name') + ':' + pp.Optional(_capture_refs.setResultsName('begin-exprs')) + pp.Optional(pp.Literal(':') + _capture_refs.setResultsName('end-exprs'))) # operator string -> function which creates an 
expression _OP_TO_EXPR = { '=*': lambda lh, rh: period.GlobEq(lh, rh), '==': lambda lh, rh: period.Eq(lh, rh), '!=': lambda lh, rh: period.LogicalNot(period.Eq(lh, rh)), '<': lambda lh, rh: period.Lt(lh, rh), '<=': lambda lh, rh: period.LtEq(lh, rh), '>': lambda lh, rh: period.Gt(lh, rh), '>=': lambda lh, rh: period.GtEq(lh, rh), } def _res_to_scope(res): if res[-1] == '$name': scope = period.EventName() elif 'id' in res: scope = period.EventFieldName(res['id']) else: assert(False) if 'dyn-scope-prefix' in res: dyn_scope = period.DynScope(res['dyn-scope-prefix'][0][0]) scope = period.DynamicScope(dyn_scope, scope) scope = period.EventScope(scope) if 'begin-scope-prefix' in res: scope = period.BeginScope(scope) if 'parent-scope-prefix' in res: scope = period.ParentScope(scope) return scope def _res_quoted_string_to_string_expression(res_quoted_string): return period.String(str(res_quoted_string)) def _res_number_to_number_expression(res_number): return period.Number(float(str(res_number))) def _create_binary_op(relop, lh, rh): return _OP_TO_EXPR[relop[0]](lh, rh) def _extract_exprs(res): exprs = [] for res_child in res: if res_child not in ('&&', '||'): expr = _expr_results_to_expression(res_child) exprs.append(expr) return exprs def _expr_results_to_expression(res_expr): # check for logical op if 'notop' in res_expr: expr = _expr_results_to_expression(res_expr[1]) return period.LogicalNot(expr) if 'andop' in res_expr: exprs = _extract_exprs(res_expr) return period.create_conjunction_from_exprs(exprs) if 'orop' in res_expr: exprs = _extract_exprs(res_expr) return period.create_disjunction_from_exprs(exprs) res_expr_name = res_expr.getName() if res_expr_name == 'name-comp-expr': ev_name_expr = _res_to_scope(res_expr['event-name']) qstring = res_expr['quoted-string'] str_expr = _res_quoted_string_to_string_expression(qstring) return _create_binary_op(res_expr['eqop'], ev_name_expr, str_expr) if res_expr_name == 'number-comp-expr': relop = res_expr['relop'] field_expr = _res_to_scope(res_expr['event-field']) number_expr = _res_number_to_number_expression(res_expr['number']) return _create_binary_op(relop, field_expr, number_expr) if res_expr_name == 'string-comp-expr': field_expr = _res_to_scope(res_expr['event-field']) qstring = res_expr['quoted-string'] str_expr = _res_quoted_string_to_string_expression(qstring) return _create_binary_op(res_expr['eqop'], field_expr, str_expr) if res_expr_name == 'field-comp-expr': lh_field_expr = _res_to_scope(res_expr['lh']) rh_field_expr = _res_to_scope(res_expr['rh']) return _create_binary_op(res_expr['relop'], lh_field_expr, rh_field_expr) assert(False) def _capture_refs_results_to_captures_exprs(res_capture_refs): captures_exprs = {} for res_capture_ref in res_capture_refs: name = None if 'var' in res_capture_ref: name = res_capture_ref['var'][0] expr = _res_to_scope(res_capture_ref[-1]) if name is None: name = str(expr) if name in captures_exprs: raise DuplicatePeriodCapture(name) captures_exprs[name] = expr return captures_exprs class PeriodDefArgParseResults: def __init__(self, parent_name, period_name, begin_expr, end_expr): self._parent_name = parent_name self._period_name = period_name self._begin_expr = begin_expr self._end_expr = end_expr @property def parent_name(self): return self._parent_name @property def period_name(self): return self._period_name @property def begin_expr(self): return self._begin_expr @property def end_expr(self): return self._end_expr class PeriodCapturesDefArgResults: def __init__(self, name, begin_captures_exprs, 
end_captures_exprs): self._name = name self._begin_captures_exprs = begin_captures_exprs self._end_captures_exprs = end_captures_exprs @property def name(self): return self._name @property def begin_captures_exprs(self): return self._begin_captures_exprs @property def end_captures_exprs(self): return self._end_captures_exprs def parse_period_def_arg(arg): try: period_def_res = _period_def.parseString(arg.split('/')[-1], parseAll=True) except Exception: raise MalformedExpression(arg) period_name = None parent_name = None if 'period-info' in period_def_res: period_info_res = period_def_res['period-info'] period_name = period_info_res['name'] if 'parent-name' in period_info_res: parent_name = period_info_res['parent-name']['id'] begin_expr = _expr_results_to_expression(period_def_res['begin-expr']) if 'end-expr' in period_def_res: end_expr = _expr_results_to_expression(period_def_res['end-expr']) else: end_expr = begin_expr return PeriodDefArgParseResults(parent_name, period_name, begin_expr, end_expr) def parse_period_captures_arg(arg): try: period_captures_res = _captures_def.parseString(arg.split('/')[-1], parseAll=True) except MalformedExpression: raise except Exception: raise MalformedExpression(arg) if 'begin-exprs' in period_captures_res: begin_captures_exprs = _capture_refs_results_to_captures_exprs( period_captures_res['begin-exprs']) else: begin_captures_exprs = {} if 'end-exprs' in period_captures_res: end_captures_exprs = _capture_refs_results_to_captures_exprs( period_captures_res['end-exprs']) else: end_captures_exprs = {} return PeriodCapturesDefArgResults(period_captures_res['name'], begin_captures_exprs, end_captures_exprs) lttnganalyses-0.6.1/lttnganalyses/cli/progressbar.py0000664000175000017500000001206212726625546024440 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import os import sys import time from . 
import mi from ..common import format_utils try: from progressbar import ETA, Bar, Percentage, ProgressBar progressbar_available = True except ImportError: progressbar_available = False # approximation for the progress bar _BYTES_PER_EVENT = 30 def get_folder_size(folder): total_size = os.path.getsize(folder) for item in os.listdir(folder): itempath = os.path.join(folder, item) if os.path.isfile(itempath): total_size += os.path.getsize(itempath) elif os.path.isdir(itempath): total_size += get_folder_size(itempath) return total_size class _Progress: def __init__(self, ts_begin, ts_end, path, use_size=False): if ts_begin is None or ts_end is None or use_size: size = get_folder_size(path) self._maxval = size / _BYTES_PER_EVENT self._use_time = False else: self._maxval = ts_end - ts_begin self._ts_begin = ts_begin self._ts_end = ts_end self._use_time = True self._at = 0 self._event_count = 0 self._last_event_count_check = 0 self._last_time_check = time.time() def update(self, event): self._event_count += 1 if self._use_time: self._at = event.timestamp - self._ts_begin else: self._at = self._event_count if self._at > self._maxval: self._at = self._maxval if self._event_count - self._last_event_count_check >= 101: self._last_event_count_check = self._event_count now = time.time() if now - self._last_time_check >= .1: self._update_progress() self._last_time_check = now def _update_progress(self): pass def finalize(self): pass class FancyProgressBar(_Progress): def __init__(self, ts_begin, ts_end, path, use_size): super().__init__(ts_begin, ts_end, path, use_size) self._pbar = None if progressbar_available: widgets = ['Processing the trace: ', Percentage(), ' ', Bar(marker='#', left='[', right=']'), ' ', ETA(), ' '] # see docs for other options self._pbar = ProgressBar(widgets=widgets, maxval=self._maxval) self._pbar.start() else: print('Warning: progressbar module not available, ' 'using --no-progress.', file=sys.stderr) def _update_progress(self): if self._pbar is None: return self._pbar.update(self._at) def finalize(self): if self._pbar is None: return self._pbar.finish() class MiProgress(_Progress): def __init__(self, ts_begin, ts_end, path, use_size): super().__init__(ts_begin, ts_end, path, use_size) if self._use_time: fmt = 'Starting analysis from {} to {}' begin = format_utils.format_timestamp(self._ts_begin) end = format_utils.format_timestamp(self._ts_end) msg = fmt.format(begin, end) else: msg = 'Starting analysis: {} estimated events'.format(round( self._maxval)) mi.print_progress(0, msg) def _update_progress(self): if self._at == self._maxval: mi.print_progress(1, 'Done!') return if self._use_time: ts_at = self._at + self._ts_begin at_ts = format_utils.format_timestamp(ts_at) end = format_utils.format_timestamp(self._ts_end) msg = '{}/{}; {} events processed'.format(at_ts, end, self._event_count) else: msg = '{} events processed'.format(self._event_count) mi.print_progress(round(self._at / self._maxval, 4), msg) def finalize(self): mi.print_progress(1, 'Done!') lttnganalyses-0.6.1/lttnganalyses/cli/__init__.py0000664000175000017500000000217512665072151023640 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the 
Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/0000775000175000017500000000000013033742625024042 5ustar mjeansonmjeanson00000000000000lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/irq.py0000664000175000017500000001115712723101501025177 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . 
import sp, sv class IrqStateProvider(sp.StateProvider): def __init__(self, state): cbs = { 'irq_handler_entry': self._process_irq_handler_entry, 'irq_handler_exit': self._process_irq_handler_exit, 'softirq_raise': self._process_softirq_raise, 'softirq_entry': self._process_softirq_entry, 'softirq_exit': self._process_softirq_exit } super().__init__(state, cbs) def _get_cpu(self, cpu_id): if cpu_id not in self._state.cpus: self._state.cpus[cpu_id] = sv.CPU(cpu_id) return self._state.cpus[cpu_id] # Hard IRQs def _process_irq_handler_entry(self, event): cpu = self._get_cpu(event['cpu_id']) irq = sv.HardIRQ.new_from_irq_handler_entry(event) cpu.current_hard_irq = irq self._state.send_notification_cb('irq_handler_entry', id=irq.id, irq_name=event['name']) def _process_irq_handler_exit(self, event): cpu = self._get_cpu(event['cpu_id']) if cpu.current_hard_irq is None or \ cpu.current_hard_irq.id != event['irq']: cpu.current_hard_irq = None return cpu.current_hard_irq.end_ts = event.timestamp cpu.current_hard_irq.ret = event['ret'] self._state.send_notification_cb('irq_handler_exit', hard_irq=cpu.current_hard_irq) cpu.current_hard_irq = None # SoftIRQs def _process_softirq_raise(self, event): cpu = self._get_cpu(event['cpu_id']) vec = event['vec'] if vec not in cpu.current_softirqs: cpu.current_softirqs[vec] = [] # Don't append a SoftIRQ object if one has already been raised, # because they are level-triggered. The only exception to this # is if the first SoftIRQ object already had a begin_ts which # means this raise was triggered after its entry, and will be # handled in the following softirq_entry if cpu.current_softirqs[vec] and \ cpu.current_softirqs[vec][0].begin_ts is None: return irq = sv.SoftIRQ.new_from_softirq_raise(event) cpu.current_softirqs[vec].append(irq) def _process_softirq_entry(self, event): cpu = self._get_cpu(event['cpu_id']) vec = event['vec'] if vec not in cpu.current_softirqs: cpu.current_softirqs[vec] = [] if cpu.current_softirqs[vec]: cpu.current_softirqs[vec][0].begin_ts = event.timestamp else: # SoftIRQ entry without a corresponding raise irq = sv.SoftIRQ.new_from_softirq_entry(event) cpu.current_softirqs[vec].append(irq) def _process_softirq_exit(self, event): cpu = self._get_cpu(event['cpu_id']) vec = event['vec'] # List of enqueued softirqs for the current cpu/vec # combination. None if vec is not found in the dictionary. current_softirqs = cpu.current_softirqs.get(vec) # Ignore the exit if either vec was not in the cpu's dict or # if its irq list was empty (i.e. no matching raise). if not current_softirqs: return current_softirqs[0].end_ts = event.timestamp self._state.send_notification_cb('softirq_exit', softirq=current_softirqs[0]) del current_softirqs[0] lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/block.py0000664000175000017500000001055212745737273025525 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. 
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . import sp, sv class BlockStateProvider(sp.StateProvider): def __init__(self, state): cbs = { 'block_rq_complete': self._process_block_rq_complete, 'block_rq_issue': self._process_block_rq_issue, 'block_bio_remap': self._process_block_bio_remap, 'block_bio_backmerge': self._process_block_bio_backmerge, } super().__init__(state, cbs) self._remap_requests = [] def _process_block_bio_remap(self, event): dev = event['dev'] sector = event['sector'] old_dev = event['old_dev'] old_sector = event['old_sector'] for req in self._remap_requests: if req.dev == old_dev and req.sector == old_sector: req.dev = dev req.sector = sector return req = sv.BlockRemapRequest(dev, sector, old_dev, old_sector) self._remap_requests.append(req) # For backmerge requests, just remove the request from the # _remap_requests queue, because we rely later on the nr_sector # which has all the info we need def _process_block_bio_backmerge(self, event): dev = event['dev'] sector = event['sector'] for remap_req in self._remap_requests: if remap_req.dev == dev and remap_req.sector == sector: self._remap_requests.remove(remap_req) def _process_block_rq_issue(self, event): dev = event['dev'] sector = event['sector'] nr_sector = event['nr_sector'] if nr_sector == 0: return req = sv.BlockIORequest.new_from_rq_issue(event) for remap_req in self._remap_requests: if remap_req.dev == dev and remap_req.sector == sector: dev = remap_req.old_dev break if dev not in self._state.disks: self._state.disks[dev] = sv.Disk(dev) self._state.disks[dev].pending_requests[sector] = req def _process_block_rq_complete(self, event): dev = event['dev'] sector = event['sector'] nr_sector = event['nr_sector'] if nr_sector == 0: return for remap_req in self._remap_requests: if remap_req.dev == dev and remap_req.sector == sector: dev = remap_req.old_dev self._remap_requests.remove(remap_req) break if dev not in self._state.disks: self._state.disks[dev] = sv.Disk(dev) disk = self._state.disks[dev] # Ignore rq_complete without matching rq_issue if sector not in disk.pending_requests: return req = disk.pending_requests[sector] # Ignore rq_complete if nr_sector does not match rq_issue's if req.nr_sector != nr_sector: return req.update_from_rq_complete(event) if req.tid in self._state.tids.keys(): proc = self._state.tids[req.tid] else: proc = None self._state.send_notification_cb('block_rq_complete', req=req, proc=proc, cpu_id=event['cpu_id'], disk=disk) del disk.pending_requests[sector] lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/statedump.py0000664000175000017500000001245312745737273026443 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the 
Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import os from . import sp, sv class StatedumpStateProvider(sp.StateProvider): def __init__(self, state): cbs = { 'lttng_statedump_process_state': self._process_lttng_statedump_process_state, 'lttng_statedump_file_descriptor': self._process_lttng_statedump_file_descriptor, 'lttng_statedump_block_device': self._process_lttng_statedump_block_device } super().__init__(state, cbs) def _process_lttng_statedump_block_device(self, event): dev = event['dev'] diskname = event['diskname'] if dev not in self._state.disks: self._state.disks[dev] = sv.Disk(dev, diskname=diskname) elif self._state.disks[dev].diskname is None: self._state.disks[dev].diskname = diskname self._state.send_notification_cb('lttng_statedump_block_device', dev=dev, diskname=diskname) def _process_lttng_statedump_process_state(self, event): tid = event['tid'] pid = event['pid'] name = event['name'] # prio is not in the payload for LTTng-modules < 2.8. Using # get() will set it to None if the key is not found prio = event.get('prio') if tid not in self._state.tids: self._state.tids[tid] = sv.Process(tid=tid) proc = self._state.tids[tid] # Even if the process got created earlier, some info might be # missing, add it now. proc.pid = pid proc.comm = name # However don't override the prio value if we already got the # information from sched_* events. if proc.prio is None: proc.prio = prio if pid != tid: # create the parent if pid not in self._state.tids: # FIXME: why is the parent's name set to that of the # child? does that make sense? # tid == pid for the parent process self._state.tids[pid] = sv.Process(tid=pid, pid=pid, comm=name) parent = self._state.tids[pid] # If the thread had opened FDs, they need to be assigned # to the parent. 
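                # (_assign_fds_to_parent(), the static method below, moves
                # each of the thread's FDs into the parent's table; if the
                # parent already has the FD it only fills in a missing
                # filename, and the thread's copy is dropped either way.)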
StatedumpStateProvider._assign_fds_to_parent(proc, parent) self._state.send_notification_cb('create_parent_proc', proc=proc, parent_proc=parent) def _process_lttng_statedump_file_descriptor(self, event): pid = event['pid'] fd = event['fd'] filename = event['filename'] cloexec = event['flags'] & os.O_CLOEXEC == os.O_CLOEXEC if pid not in self._state.tids: self._state.tids[pid] = sv.Process(tid=pid, pid=pid) proc = self._state.tids[pid] if fd not in proc.fds: proc.fds[fd] = sv.FD(fd, filename, sv.FDType.unknown, cloexec) self._state.send_notification_cb('create_fd', fd=fd, parent_proc=proc, timestamp=event.timestamp, cpu_id=event['cpu_id']) else: # just fix the filename proc.fds[fd].filename = filename self._state.send_notification_cb('update_fd', fd=fd, parent_proc=proc, timestamp=event.timestamp, cpu_id=event['cpu_id']) @staticmethod def _assign_fds_to_parent(proc, parent): if proc.fds: toremove = [] for fd in proc.fds: if fd not in parent.fds: parent.fds[fd] = proc.fds[fd] else: # best effort to fix the filename if not parent.fds[fd].filename: parent.fds[fd].filename = proc.fds[fd].filename toremove.append(fd) for fd in toremove: del proc.fds[fd] lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/net.py0000664000175000017500000000556112665072151025211 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . 
import sp, sv class NetStateProvider(sp.StateProvider): def __init__(self, state): cbs = { 'net_dev_xmit': self._process_net_dev_xmit, 'netif_receive_skb': self._process_netif_receive_skb, } super().__init__(state, cbs) def _process_net_dev_xmit(self, event): self._state.send_notification_cb('net_dev_xmit', iface_name=event['name'], sent_bytes=event['len'], cpu_id=event['cpu_id']) cpu_id = event['cpu_id'] if cpu_id not in self._state.cpus: return cpu = self._state.cpus[cpu_id] if cpu.current_tid is None: return proc = self._state.tids[cpu.current_tid] current_syscall = proc.current_syscall if current_syscall is None: return if proc.pid is not None and proc.pid != proc.tid: proc = self._state.tids[proc.pid] if current_syscall.name in sv.SyscallConsts.WRITE_SYSCALLS: # TODO: find a way to set fd_type on the write rq to allow # setting FD Type if FD hasn't yet been created fd = current_syscall.io_rq.fd if fd in proc.fds and proc.fds[fd].fd_type == sv.FDType.unknown: proc.fds[fd].fd_type = sv.FDType.maybe_net def _process_netif_receive_skb(self, event): self._state.send_notification_cb('netif_receive_skb', iface_name=event['name'], recv_bytes=event['len'], cpu_id=event['cpu_id']) lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/sched.py0000664000175000017500000002074612775773625025533 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . 
import sp, sv from ..common import version_utils class SchedStateProvider(sp.StateProvider): # The priority offset for sched_wak* events was fixed in # lttng-modules 2.7.1 upwards PRIO_OFFSET_FIX_VERSION = version_utils.Version(2, 7, 1) def __init__(self, state): cbs = { 'sched_switch': self._process_sched_switch, 'sched_migrate_task': self._process_sched_migrate_task, 'sched_wakeup': self._process_sched_wakeup, 'sched_wakeup_new': self._process_sched_wakeup, 'sched_waking': self._process_sched_wakeup, 'sched_process_fork': self._process_sched_process_fork, 'sched_process_exec': self._process_sched_process_exec, 'sched_pi_setprio': self._process_sched_pi_setprio, } super().__init__(state, cbs) def _sched_switch_per_cpu(self, cpu_id, next_tid): if cpu_id not in self._state.cpus: self._state.cpus[cpu_id] = sv.CPU(cpu_id) cpu = self._state.cpus[cpu_id] # exclude swapper process if next_tid == 0: cpu.current_tid = None else: cpu.current_tid = next_tid def _create_proc(self, tid): if tid not in self._state.tids: if tid == 0: # special case for the swapper self._state.tids[tid] = sv.Process(tid=tid, pid=0) else: self._state.tids[tid] = sv.Process(tid=tid) def _sched_switch_per_tid(self, next_tid, next_comm, prev_tid): # Instantiate processes if new self._create_proc(prev_tid) self._create_proc(next_tid) next_proc = self._state.tids[next_tid] next_proc.comm = next_comm next_proc.prev_tid = prev_tid def _check_prio_changed(self, timestamp, tid, prio): # Ignore swapper if tid == 0: return proc = self._state.tids[tid] if proc.prio != prio: proc.prio = prio self._state.send_notification_cb( 'prio_changed', timestamp=timestamp, tid=tid, prio=prio) def _process_sched_switch(self, event): timestamp = event.timestamp cpu_id = event['cpu_id'] next_tid = event['next_tid'] next_comm = event['next_comm'] next_prio = event['next_prio'] prev_tid = event['prev_tid'] prev_prio = event['prev_prio'] prev_comm = event['prev_comm'] self._sched_switch_per_cpu(cpu_id, next_tid) self._sched_switch_per_tid(next_tid, next_comm, prev_tid) self._check_prio_changed(timestamp, prev_tid, prev_prio) self._check_prio_changed(timestamp, next_tid, next_prio) wakee_proc = self._state.tids[next_tid] waker_proc = None if wakee_proc.last_waker is not None: waker_proc = self._state.tids[wakee_proc.last_waker] cb_data = { 'timestamp': timestamp, 'cpu_id': cpu_id, 'prev_tid': prev_tid, 'next_tid': next_tid, 'next_comm': next_comm, 'wakee_proc': wakee_proc, 'waker_proc': waker_proc, 'prev_comm': prev_comm, } self._state.send_notification_cb('sched_switch_per_cpu', **cb_data) self._state.send_notification_cb('sched_switch_per_tid', **cb_data) wakee_proc.last_wakeup = None wakee_proc.last_waker = None def _process_sched_migrate_task(self, event): tid = event['tid'] prio = event['prio'] if tid not in self._state.tids: proc = sv.Process() proc.tid = tid proc.comm = event['comm'] self._state.tids[tid] = proc else: proc = self._state.tids[tid] self._state.send_notification_cb( 'sched_migrate_task', proc=proc, cpu_id=event['cpu_id']) self._check_prio_changed(event.timestamp, tid, prio) def _process_sched_wakeup(self, event): target_cpu = event['target_cpu'] current_cpu = event['cpu_id'] prio = event['prio'] tid = event['tid'] if self._state.tracer_version < self.PRIO_OFFSET_FIX_VERSION: prio -= 100 if target_cpu not in self._state.cpus: self._state.cpus[target_cpu] = sv.CPU(target_cpu) if current_cpu not in self._state.cpus: self._state.cpus[current_cpu] = sv.CPU(current_cpu) # If the TID is already executing on a CPU, ignore this wakeup for 
cpu_id in self._state.cpus: cpu = self._state.cpus[cpu_id] if cpu.current_tid == tid: return if tid not in self._state.tids: proc = sv.Process() proc.tid = tid self._state.tids[tid] = proc self._check_prio_changed(event.timestamp, tid, prio) # A process can be woken up multiple times, only record # the first one if self._state.tids[tid].last_wakeup is None: self._state.tids[tid].last_wakeup = event.timestamp if self._state.cpus[current_cpu].current_tid is not None: self._state.tids[tid].last_waker = \ self._state.cpus[current_cpu].current_tid def _process_sched_process_fork(self, event): child_tid = event['child_tid'] child_pid = event['child_pid'] child_comm = event['child_comm'] parent_pid = event['parent_pid'] parent_tid = event['parent_pid'] parent_comm = event['parent_comm'] if parent_tid not in self._state.tids: self._state.tids[parent_tid] = sv.Process( parent_tid, parent_pid, parent_comm) else: self._state.tids[parent_tid].pid = parent_pid self._state.tids[parent_tid].comm = parent_comm parent_proc = self._state.tids[parent_pid] child_proc = sv.Process(child_tid, child_pid, child_comm) for fd in parent_proc.fds: old_fd = parent_proc.fds[fd] child_proc.fds[fd] = sv.FD.new_from_fd(old_fd) # Note: the parent_proc key in the notification function # refers to the parent of the FD, which in this case is # the child_proc created by the fork self._state.send_notification_cb( 'create_fd', fd=fd, parent_proc=child_proc, timestamp=event.timestamp, cpu_id=event['cpu_id']) self._state.tids[child_tid] = child_proc def _process_sched_process_exec(self, event): tid = event['tid'] if tid not in self._state.tids: proc = sv.Process() proc.tid = tid self._state.tids[tid] = proc else: proc = self._state.tids[tid] # Use LTTng procname context if available if 'procname' in event: proc.comm = event['procname'] toremove = [] for fd in proc.fds: if proc.fds[fd].cloexec: toremove.append(fd) for fd in toremove: self._state.send_notification_cb( 'close_fd', fd=fd, parent_proc=proc, timestamp=event.timestamp, cpu_id=event['cpu_id']) del proc.fds[fd] def _process_sched_pi_setprio(self, event): timestamp = event.timestamp newprio = event['newprio'] tid = event['tid'] self._check_prio_changed(timestamp, tid, newprio) lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/sp.py0000664000175000017500000000342012665072151025035 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
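# Each state provider maps event names to callback methods; process_event()
# below dispatches every event it receives to the matching callback, and
# funnels all 'sys_*'/'syscall_entry_*' events to a generic 'syscall_entry'
# callback and all 'exit_syscall'/'syscall_exit_*' events to 'syscall_exit'.
# A minimal sketch of a concrete provider (hypothetical; the real providers
# live in irq.py, block.py, net.py, sched.py, statedump.py and io.py):
#
#     class ExampleStateProvider(StateProvider):
#         def __init__(self, state):
#             cbs = {
#                 'sched_switch': self._process_sched_switch,
#             }
#             super().__init__(state, cbs)
#
#         def _process_sched_switch(self, event):
#             pass  # update self._state from the event payload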
class StateProvider: def __init__(self, state, cbs): self._state = state self._cbs = cbs def process_event(self, ev): name = ev.name if name in self._cbs: self._cbs[name](ev) # for now we process all the syscalls at the same place elif 'syscall_entry' in self._cbs and \ (name.startswith('sys_') or name.startswith('syscall_entry_')): self._cbs['syscall_entry'](ev) elif 'syscall_exit' in self._cbs and \ (name.startswith('exit_syscall') or name.startswith('syscall_exit_')): self._cbs['syscall_exit'](ev) lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/io.py0000664000175000017500000003226512775773625025053 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import os import socket from babeltrace import CTFScope from . 
import sp, sv
from ..common import format_utils, trace_utils


class IoStateProvider(sp.StateProvider):
    def __init__(self, state):
        cbs = {
            'syscall_entry': self._process_syscall_entry,
            'syscall_exit': self._process_syscall_exit,
            'syscall_entry_connect': self._process_connect,
            'writeback_pages_written': self._process_writeback_pages_written,
            'mm_vmscan_wakeup_kswapd': self._process_mm_vmscan_wakeup_kswapd,
            'mm_page_free': self._process_mm_page_free
        }

        super().__init__(state, cbs)

    def _process_syscall_entry(self, event):
        # Only handle I/O syscalls
        name = trace_utils.get_syscall_name(event)
        if name not in sv.SyscallConsts.IO_SYSCALLS:
            return

        cpu_id = event['cpu_id']
        if cpu_id not in self._state.cpus:
            return

        cpu = self._state.cpus[cpu_id]
        if cpu.current_tid is None:
            return

        proc = self._state.tids[cpu.current_tid]

        # check if we can fix the pid from a context
        self._fix_context_pid(event, proc)

        if name in sv.SyscallConsts.OPEN_SYSCALLS:
            self._track_open(event, name, proc)
        elif name in sv.SyscallConsts.CLOSE_SYSCALLS:
            self._track_close(event, name, proc)
        elif name in sv.SyscallConsts.READ_SYSCALLS or \
                name in sv.SyscallConsts.WRITE_SYSCALLS:
            self._track_read_write(event, name, proc)
        elif name in sv.SyscallConsts.SYNC_SYSCALLS:
            self._track_sync(event, name, proc)

    def _process_syscall_exit(self, event):
        cpu_id = event['cpu_id']
        if cpu_id not in self._state.cpus:
            return

        cpu = self._state.cpus[cpu_id]
        if cpu.current_tid is None:
            return

        proc = self._state.tids[cpu.current_tid]
        current_syscall = proc.current_syscall
        if current_syscall is None:
            return

        name = current_syscall.name
        if name not in sv.SyscallConsts.IO_SYSCALLS:
            return

        self._track_io_rq_exit(event, proc)

        proc.current_syscall = None

    def _process_connect(self, event):
        cpu_id = event['cpu_id']
        if cpu_id not in self._state.cpus:
            return

        cpu = self._state.cpus[cpu_id]
        if cpu.current_tid is None:
            return

        proc = self._state.tids[cpu.current_tid]
        parent_proc = self._get_parent_proc(proc)

        # FIXME: handle on syscall_exit_connect only when successful
        if 'family' in event and event['family'] == socket.AF_INET:
            fd = event['fd']
            if fd in parent_proc.fds:
                parent_proc.fds[fd].filename = format_utils.format_ipv4(
                    event['v4addr'], event['dport']
                )
                self._state.send_notification_cb('update_fd',
                                                 fd=fd,
                                                 parent_proc=proc,
                                                 timestamp=event.timestamp,
                                                 cpu_id=event['cpu_id'])

    def _process_writeback_pages_written(self, event):
        for cpu in self._state.cpus.values():
            if cpu.current_tid is None:
                continue

            current_syscall = self._state.tids[cpu.current_tid].current_syscall
            if current_syscall is None:
                continue

            if current_syscall.io_rq:
                current_syscall.io_rq.pages_written += event['pages']

    def _process_mm_vmscan_wakeup_kswapd(self, event):
        cpu_id = event['cpu_id']
        if cpu_id not in self._state.cpus:
            return

        cpu = self._state.cpus[cpu_id]
        if cpu.current_tid is None:
            return

        current_syscall = self._state.tids[cpu.current_tid].current_syscall
        if current_syscall is None:
            return

        if current_syscall.io_rq:
            current_syscall.io_rq.woke_kswapd = True

    def _process_mm_page_free(self, event):
        for cpu in self._state.cpus.values():
            if cpu.current_tid is None:
                continue

            proc = self._state.tids[cpu.current_tid]

            # if the current process is kswapd0, we need to
            # attribute the page freed to the process that
            # woke it up.
if proc.comm == 'kswapd0' and proc.prev_tid > 0: proc = self._state.tids[proc.prev_tid] current_syscall = proc.current_syscall if current_syscall is None: continue if current_syscall.io_rq and current_syscall.io_rq.woke_kswapd: current_syscall.io_rq.pages_freed += 1 def _track_open(self, event, name, proc): current_syscall = proc.current_syscall if name in sv.SyscallConsts.DISK_OPEN_SYSCALLS: current_syscall.io_rq = sv.OpenIORequest.new_from_disk_open( event, proc.tid) elif name in ['accept', 'accept4']: current_syscall.io_rq = sv.OpenIORequest.new_from_accept( event, proc.tid) elif name == 'socket': current_syscall.io_rq = sv.OpenIORequest.new_from_socket( event, proc.tid) elif name in sv.SyscallConsts.DUP_OPEN_SYSCALLS: self._track_dup(event, name, proc) def _track_dup(self, event, name, proc): current_syscall = proc.current_syscall # If the process that triggered the io_rq is a thread, # its FDs are that of the parent process parent_proc = self._get_parent_proc(proc) fds = parent_proc.fds if name == 'dup': oldfd = event['fildes'] elif name in ['dup2', 'dup3']: oldfd = event['oldfd'] newfd = event['newfd'] if newfd in fds: self._close_fd(parent_proc, newfd, event.timestamp, event['cpu_id']) elif name == 'fcntl': # Only handle if cmd == F_DUPFD (0) if event['cmd'] != 0: return oldfd = event['fd'] old_file = None if oldfd in fds: old_file = fds[oldfd] current_syscall.io_rq = sv.OpenIORequest.new_from_old_fd( event, proc.tid, old_file) if name == 'dup3': cloexec = event['flags'] & os.O_CLOEXEC == os.O_CLOEXEC current_syscall.io_rq.cloexec = cloexec def _track_close(self, event, name, proc): proc.current_syscall.io_rq = sv.CloseIORequest( event.timestamp, proc.tid, event['fd']) def _track_read_write(self, event, name, proc): current_syscall = proc.current_syscall if name == 'splice': current_syscall.io_rq = sv.ReadWriteIORequest.new_from_splice( event, proc.tid) return elif name == 'sendfile64': current_syscall.io_rq = sv.ReadWriteIORequest.new_from_sendfile64( event, proc.tid) return if name in ['writev', 'pwritev', 'readv', 'preadv']: size_key = 'vlen' elif name == 'recvfrom': size_key = 'size' elif name == 'sendto': size_key = 'len' elif name in ['recvmsg', 'sendmsg']: size_key = None else: size_key = 'count' current_syscall.io_rq = sv.ReadWriteIORequest.new_from_fd_event( event, proc.tid, size_key) def _track_sync(self, event, name, proc): current_syscall = proc.current_syscall if name == 'sync': current_syscall.io_rq = sv.SyncIORequest.new_from_sync( event, proc.tid) elif name in ['fsync', 'fdatasync']: current_syscall.io_rq = sv.SyncIORequest.new_from_fsync( event, proc.tid) elif name == 'sync_file_range': current_syscall.io_rq = sv.SyncIORequest.new_from_sync_file_range( event, proc.tid) def _track_io_rq_exit(self, event, proc): ret = event['ret'] cpu_id = event['cpu_id'] io_rq = proc.current_syscall.io_rq # io_rq can be None in the case of fcntl when cmd is not # F_DUPFD, in which case we disregard the syscall as it did # not open any FD if io_rq is None: return io_rq.update_from_exit(event) if ret >= 0: self._create_fd(proc, io_rq, cpu_id) parent_proc = self._get_parent_proc(proc) self._state.send_notification_cb('io_rq_exit', io_rq=io_rq, proc=proc, parent_proc=parent_proc, cpu_id=cpu_id) if isinstance(io_rq, sv.CloseIORequest) and ret == 0: self._close_fd(proc, io_rq.fd, io_rq.end_ts, cpu_id) def _create_fd(self, proc, io_rq, cpu_id): parent_proc = self._get_parent_proc(proc) if io_rq.fd is not None and io_rq.fd not in parent_proc.fds: if isinstance(io_rq, sv.OpenIORequest): 
parent_proc.fds[io_rq.fd] = sv.FD.new_from_open_rq(io_rq) else: parent_proc.fds[io_rq.fd] = sv.FD(io_rq.fd) self._state.send_notification_cb('create_fd', fd=io_rq.fd, parent_proc=parent_proc, timestamp=io_rq.end_ts, cpu_id=cpu_id) elif isinstance(io_rq, sv.ReadWriteIORequest): if io_rq.fd_in is not None and io_rq.fd_in not in parent_proc.fds: parent_proc.fds[io_rq.fd_in] = sv.FD(io_rq.fd_in) self._state.send_notification_cb('create_fd', fd=io_rq.fd_in, parent_proc=parent_proc, timestamp=io_rq.end_ts, cpu_id=cpu_id) if io_rq.fd_out is not None and \ io_rq.fd_out not in parent_proc.fds: parent_proc.fds[io_rq.fd_out] = sv.FD(io_rq.fd_out) self._state.send_notification_cb('create_fd', fd=io_rq.fd_out, parent_proc=parent_proc, timestamp=io_rq.end_ts, cpu_id=cpu_id) def _close_fd(self, proc, fd, timestamp, cpu_id): parent_proc = self._get_parent_proc(proc) self._state.send_notification_cb('close_fd', fd=fd, parent_proc=parent_proc, timestamp=timestamp, cpu_id=cpu_id) del parent_proc.fds[fd] def _get_parent_proc(self, proc): if proc.pid is not None and proc.tid != proc.pid: parent_proc = self._state.tids[proc.pid] else: parent_proc = proc return parent_proc def _fix_context_pid(self, event, proc): for context in event.field_list_with_scope( CTFScope.STREAM_EVENT_CONTEXT): if context != 'pid': continue # make sure the 'pid' field is not also in the event # payload, otherwise we might clash for context in event.field_list_with_scope( CTFScope.EVENT_FIELDS): if context == 'pid': return if proc.pid is None: proc.pid = event['pid'] if event['pid'] != proc.tid: proc.pid = event['pid'] parent_proc = sv.Process(proc.pid, proc.pid, proc.comm, proc.prio) self._state.tids[parent_proc.pid] = parent_proc lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/automaton.py0000664000175000017500000000636712745737273026453 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
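# automaton.py wires every state provider to a single shared State
# object: each trace event is fed to all providers in turn, and the
# analyses observe state changes through the notification callbacks
# registered on the State.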
from .sched import SchedStateProvider
from .mem import MemStateProvider
from .irq import IrqStateProvider
from .syscalls import SyscallsStateProvider
from .io import IoStateProvider
from .statedump import StatedumpStateProvider
from .block import BlockStateProvider
from .net import NetStateProvider
from .sv import MemoryManagement


class State:
    def __init__(self):
        self.cpus = {}
        self.tids = {}
        self.disks = {}
        self.mm = MemoryManagement()
        self._notification_cbs = {}
        # State changes can be handled differently depending on the
        # version of the tracer used, so keep track of it. The state
        # providers read this as state.tracer_version.
        self.tracer_version = None

    def register_notification_cbs(self, period_data, cbs):
        for name in cbs:
            if name not in self._notification_cbs:
                self._notification_cbs[name] = []
            # Store the callback in the form of (period_data, function)
            self._notification_cbs[name].append((period_data, cbs[name]))

    def send_notification_cb(self, name, **kwargs):
        if name in self._notification_cbs:
            for cb_tuple in self._notification_cbs[name]:
                cb_tuple[1](cb_tuple[0], **kwargs)

    def clear_period_notification_cbs(self, period_data):
        for name in self._notification_cbs:
            # Iterate over a copy of the list: removing entries from the
            # list being iterated over would skip elements
            for cb in list(self._notification_cbs[name]):
                if cb[0] == period_data:
                    self._notification_cbs[name].remove(cb)


class Automaton:
    def __init__(self):
        self._state = State()
        self._state_providers = [
            SchedStateProvider(self._state),
            MemStateProvider(self._state),
            IrqStateProvider(self._state),
            SyscallsStateProvider(self._state),
            IoStateProvider(self._state),
            StatedumpStateProvider(self._state),
            BlockStateProvider(self._state),
            NetStateProvider(self._state)
        ]

    def process_event(self, ev):
        for sp in self._state_providers:
            sp.process_event(ev)

    @property
    def state(self):
        return self._state
lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/mem.py0000664000175000017500000000601312665072151025172 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#               2015 - Antoine Busque
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from .
import sp


class MemStateProvider(sp.StateProvider):
    def __init__(self, state):
        cbs = {
            'mm_page_alloc': self._process_mm_page_alloc,
            'kmem_mm_page_alloc': self._process_mm_page_alloc,
            'mm_page_free': self._process_mm_page_free,
            'kmem_mm_page_free': self._process_mm_page_free,
        }

        super().__init__(state, cbs)

    def _get_current_proc(self, event):
        cpu_id = event['cpu_id']
        if cpu_id not in self._state.cpus:
            return None

        cpu = self._state.cpus[cpu_id]
        if cpu.current_tid is None:
            return None

        return self._state.tids[cpu.current_tid]

    def _process_mm_page_alloc(self, event):
        self._state.mm.page_count += 1

        # Increment the number of pages allocated during the execution
        # of all syscall I/O requests currently in progress
        for process in self._state.tids.values():
            if process.current_syscall is None:
                continue

            if process.current_syscall.io_rq:
                process.current_syscall.io_rq.pages_allocated += 1

        current_process = self._get_current_proc(event)
        if current_process is None:
            return

        self._state.send_notification_cb('tid_page_alloc',
                                         proc=current_process,
                                         cpu_id=event['cpu_id'])

    def _process_mm_page_free(self, event):
        if self._state.mm.page_count == 0:
            return

        self._state.mm.page_count -= 1

        current_process = self._get_current_proc(event)
        if current_process is None:
            return

        self._state.send_notification_cb('tid_page_free',
                                         proc=current_process,
                                         cpu_id=event['cpu_id'])
lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/syscalls.py0000664000175000017500000000533412665072151026256 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#               2015 - Antoine Busque
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from .
import sp, sv class SyscallsStateProvider(sp.StateProvider): def __init__(self, state): cbs = { 'syscall_entry': self._process_syscall_entry, 'syscall_exit': self._process_syscall_exit } super().__init__(state, cbs) def _process_syscall_entry(self, event): cpu_id = event['cpu_id'] if cpu_id not in self._state.cpus: return cpu = self._state.cpus[cpu_id] if cpu.current_tid is None: return proc = self._state.tids[cpu.current_tid] proc.current_syscall = sv.SyscallEvent.new_from_entry(event) def _process_syscall_exit(self, event): cpu_id = event['cpu_id'] if cpu_id not in self._state.cpus: return cpu = self._state.cpus[cpu_id] if cpu.current_tid is None: return proc = self._state.tids[cpu.current_tid] current_syscall = proc.current_syscall if current_syscall is None: return current_syscall.process_exit(event) self._state.send_notification_cb('syscall_exit', proc=proc, event=event, cpu_id=cpu_id) # If it's an IO Syscall, the IO state provider will take care of # clearing the current syscall, so only clear here if it's not if current_syscall.name not in sv.SyscallConsts.IO_SYSCALLS: self._state.tids[cpu.current_tid].current_syscall = None lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/__init__.py0000664000175000017500000000217512665072151026160 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. lttnganalyses-0.6.1/lttnganalyses/linuxautomaton/sv.py0000664000175000017500000003630012745737273025062 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # 2015 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import os
import socket
from ..common import format_utils, trace_utils


class Process():
    def __init__(self, tid=None, pid=None, comm='', prio=None):
        self.tid = tid
        self.pid = pid
        self.comm = comm
        self.prio = prio
        # indexed by fd
        self.fds = {}
        self.current_syscall = None
        # the process scheduled before this one
        self.prev_tid = None
        self.last_wakeup = None
        self.last_waker = None


class CPU():
    def __init__(self, cpu_id):
        self.cpu_id = cpu_id
        self.current_tid = None
        self.current_hard_irq = None
        # softirqs use a dict because multiple ones can be raised before
        # handling. They are indexed by vec, and each entry is a list,
        # ordered chronologically
        self.current_softirqs = {}


class MemoryManagement():
    def __init__(self):
        self.page_count = 0


class SyscallEvent():
    def __init__(self, name, begin_ts):
        self.name = name
        self.begin_ts = begin_ts
        self.end_ts = None
        self.ret = None
        self.duration = None
        # Only applicable to I/O syscalls
        self.io_rq = None

    def process_exit(self, event):
        self.end_ts = event.timestamp
        # On certain architectures (notably arm32), lttng-modules
        # versions prior to 2.8 would erroneously trace certain
        # syscalls (e.g. mmap2) without their return value. In this
        # case, get() will simply set self.ret to None. These syscalls
        # with a None return value should simply be ignored down the
        # line.
        self.ret = event.get('ret')
        self.duration = self.end_ts - self.begin_ts

    @classmethod
    def new_from_entry(cls, event):
        name = trace_utils.get_syscall_name(event)

        return cls(name, event.timestamp)


class Disk():
    def __init__(self, dev, diskname=None):
        self.dev = dev
        self.diskname = diskname
        # pending block I/O requests, indexed by sector
        self.pending_requests = {}


class FDType():
    unknown = 0
    disk = 1
    net = 2
    # not 100% sure they are network FDs (assumed when net_dev_xmit is
    # called during a write syscall and the type is unknown).
maybe_net = 3 @staticmethod def get_fd_type(name, family): if name in SyscallConsts.NET_OPEN_SYSCALLS: if family in SyscallConsts.INET_FAMILIES: return FDType.net if family in SyscallConsts.DISK_FAMILIES: return FDType.disk if name in SyscallConsts.DISK_OPEN_SYSCALLS: return FDType.disk return FDType.unknown class FD(): def __init__(self, fd, filename='unknown', fd_type=FDType.unknown, cloexec=False, family=None): self.fd = fd self.filename = filename self.fd_type = fd_type self.cloexec = cloexec self.family = family @classmethod def new_from_fd(cls, fd): return cls(fd.fd, fd.filename, fd.fd_type, fd.cloexec, fd.family) @classmethod def new_from_open_rq(cls, io_rq): return cls(io_rq.fd, io_rq.filename, io_rq.fd_type, io_rq.cloexec, io_rq.family) class IRQ(): def __init__(self, id, cpu_id, begin_ts=None): self.id = id self.cpu_id = cpu_id self.begin_ts = begin_ts self.end_ts = None @property def duration(self): if not self.end_ts or not self.begin_ts: return None return self.end_ts - self.begin_ts class HardIRQ(IRQ): def __init__(self, id, cpu_id, begin_ts): super().__init__(id, cpu_id, begin_ts) self.ret = None @classmethod def new_from_irq_handler_entry(cls, event): id = event['irq'] cpu_id = event['cpu_id'] begin_ts = event.timestamp return cls(id, cpu_id, begin_ts) class SoftIRQ(IRQ): def __init__(self, id, cpu_id, raise_ts=None, begin_ts=None): super().__init__(id, cpu_id, begin_ts) self.raise_ts = raise_ts @classmethod def new_from_softirq_raise(cls, event): id = event['vec'] cpu_id = event['cpu_id'] raise_ts = event.timestamp return cls(id, cpu_id, raise_ts) @classmethod def new_from_softirq_entry(cls, event): id = event['vec'] cpu_id = event['cpu_id'] begin_ts = event.timestamp return cls(id, cpu_id, begin_ts=begin_ts) class IORequest(): # I/O operations OP_OPEN = 1 OP_READ = 2 OP_WRITE = 3 OP_CLOSE = 4 OP_SYNC = 5 # Operation used for requests that both read and write, # e.g. splice and sendfile OP_READ_WRITE = 6 def __init__(self, begin_ts, size, tid, operation): self.begin_ts = begin_ts self.end_ts = None self.duration = None # request size in bytes self.size = size self.operation = operation # tid of process that triggered the rq self.tid = tid # Error number if request failed self.errno = None @staticmethod def is_equivalent_operation(left_op, right_op): """Predicate used to compare equivalence of IO_OPERATION. 
This method is employed because OP_READ_WRITE behaves like a
        set containing both OP_READ and OP_WRITE and is therefore
        equivalent to these operations as well as itself
        """
        if left_op == IORequest.OP_READ_WRITE:
            return right_op in [IORequest.OP_READ, IORequest.OP_WRITE,
                                IORequest.OP_READ_WRITE]
        if left_op == IORequest.OP_READ:
            return right_op in [IORequest.OP_READ, IORequest.OP_READ_WRITE]
        if left_op == IORequest.OP_WRITE:
            return right_op in [IORequest.OP_WRITE, IORequest.OP_READ_WRITE]

        return left_op == right_op


class SyscallIORequest(IORequest):
    def __init__(self, begin_ts, size, tid, operation, syscall_name):
        # Pass the entry-time size through: read/write and sync requests
        # know it at entry, while open/close requests pass None and the
        # size is filled in (if at all) on syscall exit
        super().__init__(begin_ts, size, tid, operation)
        self.fd = None
        self.syscall_name = syscall_name
        # Number of pages alloc'd/freed/written to disk during the rq
        self.pages_allocated = 0
        self.pages_freed = 0
        self.pages_written = 0
        # Whether kswapd was forced to wakeup during the rq
        self.woke_kswapd = False

    def update_from_exit(self, event):
        self.end_ts = event.timestamp
        self.duration = self.end_ts - self.begin_ts
        if event['ret'] < 0:
            self.errno = -event['ret']


class OpenIORequest(SyscallIORequest):
    def __init__(self, begin_ts, tid, syscall_name, filename, fd_type):
        super().__init__(begin_ts, None, tid, IORequest.OP_OPEN, syscall_name)
        # FD set on syscall exit
        self.fd = None
        self.filename = filename
        self.fd_type = fd_type
        self.family = None
        self.cloexec = False

    def update_from_exit(self, event):
        super().update_from_exit(event)
        if event['ret'] >= 0:
            self.fd = event['ret']

    @classmethod
    def new_from_disk_open(cls, event, tid):
        begin_ts = event.timestamp
        name = trace_utils.get_syscall_name(event)
        filename = event['filename']
        req = cls(begin_ts, tid, name, filename, FDType.disk)
        req.cloexec = event['flags'] & os.O_CLOEXEC == os.O_CLOEXEC

        return req

    @classmethod
    def new_from_accept(cls, event, tid):
        # Handle both accept and accept4
        begin_ts = event.timestamp
        name = trace_utils.get_syscall_name(event)
        req = cls(begin_ts, tid, name, 'socket', FDType.net)

        if 'family' in event:
            req.family = event['family']
            # Set filename to ip:port if INET socket
            if req.family == socket.AF_INET:
                req.filename = format_utils.format_ipv4(
                    event['v4addr'], event['sport']
                )

        return req

    @classmethod
    def new_from_socket(cls, event, tid):
        begin_ts = event.timestamp
        req = cls(begin_ts, tid, 'socket', 'socket', FDType.net)

        if 'family' in event:
            req.family = event['family']

        return req

    @classmethod
    def new_from_old_fd(cls, event, tid, old_fd):
        begin_ts = event.timestamp
        name = trace_utils.get_syscall_name(event)
        if old_fd is None:
            filename = 'unknown'
            fd_type = FDType.unknown
        else:
            filename = old_fd.filename
            fd_type = old_fd.fd_type

        return cls(begin_ts, tid, name, filename, fd_type)


class CloseIORequest(SyscallIORequest):
    def __init__(self, begin_ts, tid, fd):
        super().__init__(begin_ts, None, tid, IORequest.OP_CLOSE, 'close')
        self.fd = fd


class ReadWriteIORequest(SyscallIORequest):
    def __init__(self, begin_ts, size, tid, operation, syscall_name):
        super().__init__(begin_ts, size, tid, operation, syscall_name)
        # The size returned on syscall exit, in bytes.
May differ from
        # the size initially requested
        self.returned_size = None
        # Unused if fd is set
        self.fd_in = None
        self.fd_out = None

    def update_from_exit(self, event):
        super().update_from_exit(event)
        ret = event['ret']
        if ret >= 0:
            self.returned_size = ret
            # Set the size to the returned one if none was set at
            # entry, as with recvmsg or sendmsg
            if self.size is None:
                self.size = ret

    @classmethod
    def new_from_splice(cls, event, tid):
        begin_ts = event.timestamp
        size = event['len']
        req = cls(begin_ts, size, tid, IORequest.OP_READ_WRITE, 'splice')
        req.fd_in = event['fd_in']
        req.fd_out = event['fd_out']

        return req

    @classmethod
    def new_from_sendfile64(cls, event, tid):
        begin_ts = event.timestamp
        size = event['count']
        req = cls(begin_ts, size, tid, IORequest.OP_READ_WRITE, 'sendfile64')
        req.fd_in = event['in_fd']
        req.fd_out = event['out_fd']

        return req

    @classmethod
    def new_from_fd_event(cls, event, tid, size_key):
        begin_ts = event.timestamp
        # Some events, like recvmsg or sendmsg, only have size info on return
        if size_key is not None:
            size = event[size_key]
        else:
            size = None

        syscall_name = trace_utils.get_syscall_name(event)
        if syscall_name in SyscallConsts.READ_SYSCALLS:
            operation = IORequest.OP_READ
        else:
            operation = IORequest.OP_WRITE

        req = cls(begin_ts, size, tid, operation, syscall_name)
        req.fd = event['fd']

        return req


class SyncIORequest(SyscallIORequest):
    def __init__(self, begin_ts, size, tid, syscall_name):
        super().__init__(begin_ts, size, tid, IORequest.OP_SYNC, syscall_name)

    @classmethod
    def new_from_sync(cls, event, tid):
        begin_ts = event.timestamp
        size = None

        return cls(begin_ts, size, tid, 'sync')

    @classmethod
    def new_from_fsync(cls, event, tid):
        # Also handle fdatasync
        begin_ts = event.timestamp
        size = None
        syscall_name = trace_utils.get_syscall_name(event)
        req = cls(begin_ts, size, tid, syscall_name)
        req.fd = event['fd']

        return req

    @classmethod
    def new_from_sync_file_range(cls, event, tid):
        begin_ts = event.timestamp
        size = event['nbytes']
        req = cls(begin_ts, size, tid, 'sync_file_range')
        req.fd = event['fd']

        return req


class BlockIORequest(IORequest):
    # Logical sector size in bytes, according to the kernel
    SECTOR_SIZE = 512

    def __init__(self, begin_ts, tid, operation, dev, sector, nr_sector):
        size = nr_sector * BlockIORequest.SECTOR_SIZE
        super().__init__(begin_ts, size, tid, operation)
        self.dev = dev
        self.sector = sector
        self.nr_sector = nr_sector

    def update_from_rq_complete(self, event):
        self.end_ts = event.timestamp
        self.duration = self.end_ts - self.begin_ts

    @classmethod
    def new_from_rq_issue(cls, event):
        begin_ts = event.timestamp
        dev = event['dev']
        sector = event['sector']
        nr_sector = event['nr_sector']
        tid = event['tid']
        # An even rwbs value indicates a read operation, an odd one a write
        if event['rwbs'] % 2 == 0:
            operation = IORequest.OP_READ
        else:
            operation = IORequest.OP_WRITE

        return cls(begin_ts, tid, operation, dev, sector, nr_sector)


class BlockRemapRequest():
    def __init__(self, dev, sector, old_dev, old_sector):
        self.dev = dev
        self.sector = sector
        self.old_dev = old_dev
        self.old_sector = old_sector


class SyscallConsts():
    # TODO: decouple socket/family logic from this class
    INET_FAMILIES = [socket.AF_INET, socket.AF_INET6]
    DISK_FAMILIES = [socket.AF_UNIX]
    # list of syscalls that open a FD on disk (in the exit_syscall event)
    DISK_OPEN_SYSCALLS = ['open', 'openat']
    # list of syscalls that open a FD on the network
    # (in the exit_syscall event)
    NET_OPEN_SYSCALLS = ['socket']
    # list of syscalls that can duplicate a FD
    DUP_OPEN_SYSCALLS = ['fcntl', 'dup', 'dup2', 'dup3']
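    # list of syscalls that synchronize file data and/or metadata to disk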
SYNC_SYSCALLS = ['sync', 'sync_file_range', 'fsync', 'fdatasync']
    # merge the 3 open lists
    OPEN_SYSCALLS = DISK_OPEN_SYSCALLS + NET_OPEN_SYSCALLS + DUP_OPEN_SYSCALLS
    # list of syscalls that close a FD (in the 'fd =' field)
    CLOSE_SYSCALLS = ['close']
    # list of syscalls that read from a FD; the number of bytes read is
    # in the following exit_syscall event
    READ_SYSCALLS = ['read', 'recvmsg', 'recvfrom', 'readv', 'pread',
                     'pread64', 'preadv']
    # list of syscalls that write to a FD; the number of bytes written
    # is in the following exit_syscall event
    WRITE_SYSCALLS = ['write', 'sendmsg', 'sendto', 'writev', 'pwrite',
                      'pwrite64', 'pwritev']
    # list of syscalls that both read and write on two FDs
    READ_WRITE_SYSCALLS = ['splice', 'sendfile64']
    # All I/O related syscalls
    IO_SYSCALLS = OPEN_SYSCALLS + CLOSE_SYSCALLS + READ_SYSCALLS + \
        WRITE_SYSCALLS + SYNC_SYSCALLS + READ_WRITE_SYSCALLS
lttnganalyses-0.6.1/lttnganalyses/__init__.py0000664000175000017500000000015012553274232023060 0ustar mjeansonmjeanson00000000000000"""TODO"""

from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
lttnganalyses-0.6.1/lttng-schedfreq0000775000175000017500000000234512665072151021107 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3
#
# The MIT License (MIT)
#
# Copyright (C) 2015 - Antoine Busque
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from lttnganalyses.cli import sched

if __name__ == '__main__':
    sched.runfreq()
lttnganalyses-0.6.1/LICENSE0000664000175000017500000000031712665072151017071 0ustar mjeansonmjeanson00000000000000LTTng-Analyses - Licensing

These analyses are released under the MIT license. This license is used to allow the use of these analyses in both free and proprietary software.

See mit-license.txt for details.
lttnganalyses-0.6.1/lttng-iousagetop0000775000175000017500000000234712553274232021324 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3
#
# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from lttnganalyses.cli import io if __name__ == '__main__': io.runusage() lttnganalyses-0.6.1/lttng-schedstats0000775000175000017500000000235312665072151021307 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from lttnganalyses.cli import sched if __name__ == '__main__': sched.runstats() lttnganalyses-0.6.1/lttng-cputop0000775000175000017500000000235112553274232020452 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
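# Thin executable wrapper: the cputop analysis itself is implemented in
# lttnganalyses.cli.cputop.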
from lttnganalyses.cli import cputop if __name__ == '__main__': cputop.run() lttnganalyses-0.6.1/PKG-INFO0000664000175000017500000023373213033742625017172 0ustar mjeansonmjeanson00000000000000Metadata-Version: 1.1 Name: lttnganalyses Version: 0.6.1 Summary: LTTng analyses Home-page: https://github.com/lttng/lttng-analyses Author: Julien Desfossez Author-email: jdesfossez@efficios.com License: MIT Description: LTTng analyses ************** .. image:: https://img.shields.io/pypi/v/lttnganalyses.svg?label=Latest%20version :target: https://pypi.python.org/pypi/lttnganalyses :alt: Latest version released on PyPi .. image:: https://travis-ci.org/lttng/lttng-analyses.svg?branch=master&label=Travis%20CI%20build :target: https://travis-ci.org/lttng/lttng-analyses :alt: Status of Travis CI .. image:: https://img.shields.io/jenkins/s/https/ci.lttng.org/lttng-analyses_master_build.svg?label=LTTng%20CI%20build :target: https://ci.lttng.org/job/lttng-analyses_master_build :alt: Status of LTTng CI The **LTTng analyses** are a set of various executable analyses to extract and visualize monitoring data and metrics from `LTTng `_ kernel traces on the command line. As opposed to other "live" diagnostic or monitoring solutions, this approach is based on the following workflow: #. Record your system's activity with LTTng, a low-overhead tracer. #. Do whatever it takes for your problem to occur. #. Diagnose your problem's cause **offline** (when tracing is stopped). This solution allows you to target problems that are hard to find and to "dig" until the root cause is found. **Current limitations**: - The LTTng analyses can be quite slow to execute. There are a number of places where they could be optimized, but using the Python interpreter seems to be an important impediment. This project is regarded by its authors as a testing ground to experiment analysis features, user interfaces, and usability in general. It is not considered ready to analyze long traces. **Contents**: .. contents:: :local: :depth: 3 :backlinks: none Install LTTng analyses ====================== .. NOTE:: The version 2.0 of `Trace Compass `_ requires LTTng analyses 0.4: Trace Compass 2.0 is not compatible with LTTng analyses 0.5 and after. In this case, we suggest that you install LTTng analyses from the ``stable-0.4`` branch of the project's Git repository (see `Install from the Git repository`_). You can also `download `_ the latest 0.4 release tarball and follow the `Install from a release tarball`_ procedure. Required dependencies --------------------- - `Python `_ ≥ 3.4 - `setuptools `_ - `pyparsing `_ ≥ 2.0.0 - `Babeltrace `_ ≥ 1.2 with Python bindings (``--enable-python-bindings`` when building from source) Optional dependencies --------------------- - `LTTng `_ ≥ 2.5: to use the ``lttng-analyses-record`` script and to trace the system in general - `termcolor `_: color support - `progressbar `_: terminal progress bar support (this is not required for the machine interface's progress indication feature) Install from PyPI (online repository) ------------------------------------- To install the latest LTTng analyses release on your system from `PyPI `_: #. Install the required dependencies. #. **Optional**: Install the optional dependencies. #. Make sure ``pip`` for Python 3 is installed on your system. The package is named ``python3-pip`` on most distributions (``python-pip`` on Arch Linux). #. Use ``pip3`` to install LTTng analyses: .. 
code-block:: bash

           sudo pip3 install --upgrade lttnganalyses

        Note that you can also install LTTng analyses locally, only for
        your user:

        .. code-block:: bash

           pip3 install --user --upgrade lttnganalyses

        Files are installed in ``~/.local``, therefore ``~/.local/bin``
        must be part of your ``PATH`` environment variable for the LTTng
        analyses to be launchable.

        Install from a release tarball
        ------------------------------

        To install a specific LTTng analyses release (tarball) on your
        system:

        #. Install the required dependencies.
        #. **Optional**: Install the optional dependencies.
        #. `Download `_ and extract the desired release tarball.
        #. Use ``setup.py`` to install LTTng analyses:

           .. code-block:: bash

              sudo ./setup.py install

        Install from the Git repository
        -------------------------------

        To install LTTng analyses from a specific branch or tag of the
        project's Git repository:

        #. Install the required dependencies.
        #. **Optional**: Install the optional dependencies.
        #. Make sure ``pip`` for Python 3 is installed on your system.
           The package is named ``python3-pip`` on most distributions
           (``python-pip`` on Arch Linux).
        #. Use ``pip3`` to install LTTng analyses:

           .. code-block:: bash

              sudo pip3 install --upgrade git+git://github.com/lttng/lttng-analyses.git@master

           Replace ``master`` with the desired branch or tag name to
           install in the previous URL.

           Note that you can also install LTTng analyses locally, only
           for your user:

           .. code-block:: bash

              pip3 install --user --upgrade git+git://github.com/lttng/lttng-analyses.git@master

           Files are installed in ``~/.local``, therefore
           ``~/.local/bin`` must be part of your ``PATH`` environment
           variable for the LTTng analyses to be launchable.

        Install on Ubuntu
        -----------------

        To install LTTng analyses on Ubuntu ≥ 12.04:

        #. Add the *LTTng Latest Stable* PPA repository:

           .. code-block:: bash

              sudo apt-get install -y software-properties-common
              sudo apt-add-repository -y ppa:lttng/ppa
              sudo apt-get update

           Replace ``software-properties-common`` with
           ``python-software-properties`` on Ubuntu 12.04.

        #. Install the required dependencies:

           .. code-block:: bash

              sudo apt-get install -y babeltrace
              sudo apt-get install -y python3-babeltrace
              sudo apt-get install -y python3-setuptools

           On Ubuntu > 12.04:

           .. code-block:: bash

              sudo apt-get install -y python3-pyparsing

           On Ubuntu 12.04:

           .. code-block:: bash

              sudo pip3 install --upgrade pyparsing

        #. **Optional**: Install the optional dependencies:

           .. code-block:: bash

              sudo apt-get install -y lttng-tools
              sudo apt-get install -y lttng-modules-dkms
              sudo apt-get install -y python3-progressbar
              sudo apt-get install -y python3-termcolor

        #. Install LTTng analyses:

           .. code-block:: bash

              sudo apt-get install -y python3-lttnganalyses

        Install on Debian "sid"
        -----------------------

        To install LTTng analyses on Debian "sid":

        #. Install the required dependencies:

           .. code-block:: bash

              sudo apt-get install -y babeltrace
              sudo apt-get install -y python3-babeltrace
              sudo apt-get install -y python3-setuptools
              sudo apt-get install -y python3-pyparsing

        #. **Optional**: Install the optional dependencies:

           .. code-block:: bash

              sudo apt-get install -y lttng-tools
              sudo apt-get install -y lttng-modules-dkms
              sudo apt-get install -y python3-progressbar
              sudo apt-get install -y python3-termcolor

        #. Install LTTng analyses:

           .. code-block:: bash

              sudo apt-get install -y python3-lttnganalyses

        Record a trace
        ==============

        This section is a quick reminder of how to record an LTTng
        kernel trace. See LTTng's `quick start guide `_ to familiarize
        yourself with LTTng.
Automatic --------- LTTng analyses ships with a handy (installed) script, ``lttng-analyses-record``, which automates the steps to record a kernel trace with the events required by the analyses. To use ``lttng-analyses-record``: #. Launch the installed script: .. code-block:: bash lttng-analyses-record #. Do whatever it takes for your problem to occur. #. When you are done recording, press Ctrl+C where the script is running. Manual ------ To record an LTTng kernel trace suitable for the LTTng analyses: #. Create a tracing session: .. code-block:: bash sudo lttng create #. Create a channel with a large sub-buffer size: .. code-block:: bash sudo lttng enable-channel --kernel chan --subbuf-size=8M #. Create event rules to capture the needed events: .. code-block:: bash sudo lttng enable-event --kernel --channel=chan block_bio_backmerge sudo lttng enable-event --kernel --channel=chan block_bio_remap sudo lttng enable-event --kernel --channel=chan block_rq_complete sudo lttng enable-event --kernel --channel=chan block_rq_issue sudo lttng enable-event --kernel --channel=chan irq_handler_entry sudo lttng enable-event --kernel --channel=chan irq_handler_exit sudo lttng enable-event --kernel --channel=chan irq_softirq_entry sudo lttng enable-event --kernel --channel=chan irq_softirq_exit sudo lttng enable-event --kernel --channel=chan irq_softirq_raise sudo lttng enable-event --kernel --channel=chan kmem_mm_page_alloc sudo lttng enable-event --kernel --channel=chan kmem_mm_page_free sudo lttng enable-event --kernel --channel=chan lttng_statedump_block_device sudo lttng enable-event --kernel --channel=chan lttng_statedump_file_descriptor sudo lttng enable-event --kernel --channel=chan lttng_statedump_process_state sudo lttng enable-event --kernel --channel=chan mm_page_alloc sudo lttng enable-event --kernel --channel=chan mm_page_free sudo lttng enable-event --kernel --channel=chan net_dev_xmit sudo lttng enable-event --kernel --channel=chan netif_receive_skb sudo lttng enable-event --kernel --channel=chan sched_pi_setprio sudo lttng enable-event --kernel --channel=chan sched_process_exec sudo lttng enable-event --kernel --channel=chan sched_process_fork sudo lttng enable-event --kernel --channel=chan sched_switch sudo lttng enable-event --kernel --channel=chan sched_wakeup sudo lttng enable-event --kernel --channel=chan sched_waking sudo lttng enable-event --kernel --channel=chan softirq_entry sudo lttng enable-event --kernel --channel=chan softirq_exit sudo lttng enable-event --kernel --channel=chan softirq_raise sudo lttng enable-event --kernel --channel=chan --syscall --all #. Start recording: .. code-block:: bash sudo lttng start #. Do whatever it takes for your problem to occur. #. Stop recording and destroy the tracing session to free its resources: .. code-block:: bash sudo lttng stop sudo lttng destroy See the `LTTng Documentation `_ for other use cases, like sending the trace data over the network instead of recording trace files on the target's file system. Run an LTTng analysis ===================== The **LTTng analyses** are a set of various command-line analyses. Each analysis accepts the path to a recorded trace (see `Record a trace`_) as its argument, as well as various command-line options to control the analysis and its output. Many command-line options are common to all the analyses, so that you can filter by timerange, process name, process ID, minimum and maximum values, and the rest. Also note that the reported timestamps can optionally be expressed in the GMT time zone. 
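        For example, a first look at a recorded trace (the trace path here is
        hypothetical) could combine a "top" analysis with some of the
        filtering options described below:

        .. code-block:: bash

           lttng-cputop /path/to/trace --limit 10 --procname bash,vim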
        Each analysis is installed as an executable starting with the
        ``lttng-`` prefix.

        .. list-table:: Available LTTng analyses
           :header-rows: 1

           * - Command
             - Description
           * - ``lttng-cputop``
             - Per-TID, per-CPU, and total top CPU usage.
           * - ``lttng-iolatencyfreq``
             - I/O request latency distribution.
           * - ``lttng-iolatencystats``
             - Partition and system call latency statistics.
           * - ``lttng-iolatencytop``
             - Top system call latencies.
           * - ``lttng-iolog``
             - I/O operations log.
           * - ``lttng-iousagetop``
             - I/O usage top.
           * - ``lttng-irqfreq``
             - Interrupt handler duration frequency distribution.
           * - ``lttng-irqlog``
             - Interrupt log.
           * - ``lttng-irqstats``
             - Hardware and software interrupt statistics.
           * - ``lttng-memtop``
             - Per-TID top allocated/freed memory.
           * - ``lttng-schedfreq``
             - Scheduling latency frequency distribution.
           * - ``lttng-schedlog``
             - Scheduling log.
           * - ``lttng-schedstats``
             - Scheduling latency statistics.
           * - ``lttng-schedtop``
             - Top scheduling latencies.
           * - ``lttng-periodlog``
             - Period log.
           * - ``lttng-periodstats``
             - Period duration statistics.
           * - ``lttng-periodtop``
             - Period duration top.
           * - ``lttng-periodfreq``
             - Period duration frequency distribution.
           * - ``lttng-syscallstats``
             - Per-TID and global system call statistics.

        Use the ``--help`` option of any command to list the
        descriptions of the possible command-line options.

        .. NOTE::

           You can set the ``LTTNG_ANALYSES_DEBUG`` environment variable
           to ``1`` when you launch an analysis to enable a debug
           output. You can also use the general ``--debug`` option.

        Filtering options
        -----------------

        Depending on the analysis, filter options are available. The
        complete list of filter options is:

        .. list-table:: Available filtering command-line options
           :header-rows: 1

           * - Command-line option
             - Description
           * - ``--begin``
             - Trace time at which to begin the analysis.
               Format: ``HH:MM:SS[.NNNNNNNNN]``.
           * - ``--cpu``
             - Comma-delimited list of CPU IDs for which to display the
               results.
           * - ``--end``
             - Trace time at which to end the analysis.
               Format: ``HH:MM:SS[.NNNNNNNNN]``.
           * - ``--irq``
             - List of hardware IRQ numbers for which to display the
               results.
           * - ``--limit``
             - Maximum number of output rows per table. This option is
               useful for "top" analyses, like ``lttng-cputop``.
           * - ``--min``
             - Minimum duration (µs) to keep in results.
           * - ``--minsize``
             - Minimum I/O operation size (B) to keep in results.
           * - ``--max``
             - Maximum duration (µs) to keep in results.
           * - ``--maxsize``
             - Maximum I/O operation size (B) to keep in results.
           * - ``--procname``
             - Comma-delimited list of process names for which to
               display the results.
           * - ``--softirq``
             - List of software IRQ numbers for which to display the
               results.
           * - ``--tid``
             - Comma-delimited list of thread IDs for which to display
               the results.

        Period options
        --------------

        LTTng analyses feature a powerful "period engine". A *period*
        is an interval which begins and ends under specific conditions.
        When the analysis results are displayed, they are isolated for
        the periods that were opened and closed during the process.

        A period can have a parent. If it's the case, then its parent
        needs to exist for the period to begin at all. This tree
        structure of periods is useful to keep a form of custom user
        state during the generic kernel analysis.

        .. ATTENTION::

           The ``--period`` and ``--period-captures`` options'
           arguments include characters that are considered special by
           most shells, like ``$``, ``*``, and ``&``. Make sure to
           always **single-quote** those arguments when running the
           LTTng analyses on the command line.
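        For instance, the simple period definition shown in the
        examples below must be passed as a single-quoted argument (the
        trace path here is hypothetical):

        .. code-block:: bash

           lttng-periodlog /path/to/trace --period 'switch : $evt.$name == "sched_switch"'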
Period definition ~~~~~~~~~~~~~~~~~ You can define one or more periods on the command line, when launching an analysis, with the ``--period`` option. This option's argument accepts the following form (content within square brackets is optional):: [ NAME [ (PARENT) ] ] : BEGINEXPR [ : ENDEXPR ] ``NAME`` Optional name of the period definition. All periods opened from this definition have this name. The syntax of this name is the same as a C identifier. ``PARENT`` Optional name of a *previously defined* period which acts as the parent period definition of this definition. ``NAME`` must be set for ``PARENT`` to be set. ``BEGINEXPR`` Matching expression which a given event must match in order for an actual period to be instantiated by this definition. ``ENDEXPR`` Matching expression which a given event must match in order for an instance of this definition to be closed. If this part is omitted, ``BEGINEXPR`` is used for the ending expression too. Matching expression ................... A matching expression is a C-like logical expression. It supports nesting expressions with ``(`` and ``)``, as well as the ``&&`` (logical *AND*), ``||`` (logical *OR*), and ``!`` (logical *NOT*) operators. The precedence of those operators is the same as in the C language. The atomic operands in those logical expressions are comparisons. For the following comparison syntaxes, consider that: - ``EVT`` indicates an event source. The available event sources are: ``$evt`` Current event. ``$begin.$evt`` In ``BEGINEXPR``: current event (same as ``$evt``). In ``ENDEXPR``: event which, for this period instance, was matched when ``BEGINEXPR`` was evaluated. ``$parent.$begin.$evt`` Event which, for the parent period instance of this period instance, was matched when ``BEGINEXPR`` of the parent was evaluated. - ``FIELD`` indicates an event field source. The available event field sources are: ``NAME`` (direct field name) Automatic scope: try to find the field named ``NAME`` in the dynamic scopes in this order: #. Event payload #. Event context #. Event header #. Stream event context #. Packet context #. Packet header ``$payload.NAME`` Event payload field named ``NAME``. ``$ctx.NAME`` Event context field named ``NAME``. ``$header.NAME`` Event header field named ``NAME``. ``$stream_ctx.NAME`` Stream event context field named ``NAME``. ``$pkt_ctx.NAME`` Packet context field named ``NAME``. ``$pkt_header.NAME`` Packet header field named ``NAME``. - ``VALUE`` indicates one of: - A constant, decimal number. This can be an integer or a real number, positive or negative, and supports the ``e`` scientific notation. Examples: ``23``, ``-18.28``, ``7.2e9``. - A double-quoted literal string. ``"`` and ``\`` can be escaped with ``\``. Examples: ``"hello, world!"``, ``"here's another \"quoted\" string"``. - An event field, that is, ``EVT.FIELD``, considering the replacements described above. - ``NUMVALUE`` indicates one of: - A constant, decimal number. This can be an integer or a real number, positive or negative, and supports the ``e`` scientific notation. Examples: ``23``, ``-18.28``, ``7.2e9``. - An event field, that is, ``EVT.FIELD``, considering the replacements described above. .. list-table:: Available comparison syntaxes for matching expressions :header-rows: 1 * - Comparison syntax - Description * - #. ``EVT.$name == "NAME"`` #. ``EVT.$name != "NAME"`` #. ``EVT.$name =* "PATTERN"`` - Name matching: #. Name of event source ``EVT`` is equal to ``NAME``. #. Name of event source ``EVT`` is not equal to ``NAME``. #. 
Name of event source ``EVT`` satisfies the globbing pattern ``PATTERN`` (see `fnmatch `_). * - #. ``EVT.FIELD == VALUE`` #. ``EVT.FIELD != VALUE`` #. ``EVT.FIELD < NUMVALUE`` #. ``EVT.FIELD <= NUMVALUE`` #. ``EVT.FIELD > NUMVALUE`` #. ``EVT.FIELD >= NUMVALUE`` #. ``EVT.FIELD =* "PATTERN"`` - Value matching: #. The value of the field ``EVT.FIELD`` is equal to the value ``VALUE``. #. The value of the field ``EVT.FIELD`` is not equal to the value ``VALUE``. #. The value of the field ``EVT.FIELD`` is lesser than the value ``NUMVALUE``. #. The value of the field ``EVT.FIELD`` is lesser than or equal to the value ``NUMVALUE``. #. The value of the field ``EVT.FIELD`` is greater than the value ``NUMVALUE``. #. The value of the field ``EVT.FIELD`` is greater than or equal to the value ``NUMVALUE``. #. The value of the field ``EVT.FIELD`` satisfies the globbing pattern ``PATTERN`` (see `fnmatch `_). In any case, if ``EVT.FIELD`` does not target an existing field, the comparison including it fails. Also, string fields cannot be compared to number values (constant or fields). Examples ........ - Create a period instance named ``switch`` when: - The current event name is ``sched_switch``. End this period instance when: - The current event name is ``sched_switch``. Period definition:: switch : $evt.$name == "sched_switch" - Create a period instance named ``switch`` when: - The current event name is ``sched_switch`` *AND* - The current event's ``next_tid`` field is *NOT* equal to 0. End this period instance when: - The current event name is ``sched_switch`` *AND* - The current event's ``prev_tid`` field is equal to the ``next_tid`` field of the matched event in the begin expression *AND* - The current event's ``cpu_id`` field is equal to the ``cpu_id`` field of the matched event in the begin expression. Period definition:: switch : $evt.$name == "sched_switch" && $evt.next_tid != 0 : $evt.$name == "sched_switch" && $evt.prev_tid == $begin.$evt.next_tid && $evt.cpu_id == $begin.$evt.cpu_id - Create a period instance named ``irq`` when: - A parent period instance named ``switch`` is currently opened. - The current event name satisfies the ``irq_*_entry`` globbing pattern *AND* - The current event's ``cpu_id`` field is equal to the ``cpu_id`` field of the matched event in the begin expression of the parent period instance. End this period instance when: - The current event name is ``irq_handler_exit`` *AND* - The current event's ``cpu_id`` field is equal to the ``cpu_id`` field of the matched event in the begin expression. Period definition:: irq(switch) : $evt.$name =* "irq_*_entry" && $evt.cpu_id == $parent.$begin.$evt.cpu_id : $evt.$name == "irq_handler_exit" && $evt.cpu_id == $begin.$evt.cpu_id - Create a period instance named ``hello`` when: - The current event name satisfies the ``hello*`` globbing pattern, but excludes ``hello world``. End this period instance when: - The current event name is the same as the name of the matched event in the begin expression *AND* - The current event's ``theid`` header field is lesser than or equal to 231. Period definition:: hello : $evt.$name =* "hello*" && $evt.$name != "hello world" : $evt.$name == $begin.$evt.$name && $evt.$header.theid <= 231 Period captures ~~~~~~~~~~~~~~~ When a period instance begins or ends, the analysis can capture the current values of specific event fields and display them in its results. You can set period captures with the ``--period-captures`` command-line option. 
Period captures
~~~~~~~~~~~~~~~

When a period instance begins or ends, the analysis can capture the
current values of specific event fields and display them in its
results.

You can set period captures with the ``--period-captures`` command-line
option. This option's argument accepts the following form (content
within square brackets is optional)::

    NAME : BEGINCAPTURES [ : ENDCAPTURES ]

``NAME``
    Name of period instances on which to apply those captures.

    A ``--period`` option in the same command line must define this
    name.

``BEGINCAPTURES``
    Comma-delimited list of event fields to capture when the beginning
    expression of the period definition named ``NAME`` is matched.

``ENDCAPTURES``
    Comma-delimited list of event fields to capture when the ending
    expression of the period definition named ``NAME`` is matched.

    If this part is omitted, there are no end captures.

The format of ``BEGINCAPTURES`` and ``ENDCAPTURES`` is a comma-delimited
list of tokens having this format::

    [ CAPTURENAME = ] EVT.FIELD

or::

    [ CAPTURENAME = ] EVT.$name

``CAPTURENAME``
    Custom name for this capture. The syntax of this name is the same
    as a C identifier.

    If this part is omitted, the literal expression used for
    ``EVT.FIELD`` is used.

``EVT`` and ``FIELD``
    See `Matching expression`_.

Period select and aggregate parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

With ``lttng-periodlog``, it is possible to see the list of periods in
the context of their parent. When the ``--aggregate-by`` option is
specified, each line of the log shows the time range of the period
selected with the ``--select`` argument alongside the time range of the
parent period that contains it.

In ``lttng-periodstats`` and ``lttng-periodfreq``, these two flags are
used as filters to limit the output to only the relevant periods. If
they are omitted, all existing combinations of parent/child statistics
and frequency distributions are output.

Grouping
~~~~~~~~

When fields are captured during the period analyses, it is possible to
compute the statistics and frequency distributions grouped by the
values of these fields, instead of globally for the whole trace.

The format is::

    --group-by "PERIODNAME.CAPTURENAME[, PERIODNAME.CAPTURENAME]"

If multiple values are passed, the analysis outputs one list of tables
(statistics and/or frequency distribution) for each unique combination
of the fields' values.

For example, if we track the ``open`` system call and we are interested
in the average duration of this call by filename, we only have to
capture the ``filename`` field and group the results by
``open.filename``, as in the sketch below.
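A minimal sketch of that filename grouping follows. The simple ``open``
period definition used here is an assumption made for brevity: it
matches the entry and exit events by name only, without checking the
CPU or TID as the more careful examples below do.

.. code-block:: bash

   # One statistics table per distinct filename passed to open(2)
   lttng-periodstats /path/to/trace \
       --period 'open : $evt.$name == "syscall_entry_open" : $evt.$name == "syscall_exit_open"' \
       --period-captures 'open : filename = $evt.filename' \
       --group-by "open.filename"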
Examples
........

Begin captures only::

    switch : $evt.next_tid,
             name = $evt.$name,
             msg_id = $parent.$begin.$evt.id

Begin and end captures::

    hello : beginning = $evt.$ctx.begin_ts, $evt.received_bytes
          : $evt.send_bytes, $evt.$name,
            begin = $begin.$evt.$ctx.begin_ts,
            end = $evt.$ctx.end_ts

Top scheduling latency (delay between ``sched_waking(tid=$TID)`` and
``sched_switch(next_tid=$TID)``), recording the procname of the waker
(dependent on the ``procname`` context being recorded in the trace),
the priority, and the target CPU:

.. code-block:: bash

   lttng-periodtop /path/to/trace \
       --period 'wake : $evt.$name == "sched_waking" : $evt.$name == "sched_switch" && $evt.next_tid == $begin.$evt.$payload.tid' \
       --period-captures 'wake : waker = $evt.procname, prio = $evt.prio : wakee = $evt.next_comm, cpu = $evt.cpu_id'

::

    Timerange: [2016-07-21 17:07:47.832234248, 2016-07-21 17:07:48.948152659]

    Period top

    Begin                End                   Duration (us)  Name  Begin capture              End capture
    [17:07:47.835338581, 17:07:47.946834976]      111496.395  wake  waker = lttng-consumerd    wakee = kworker/0:2
                                                                    prio = 20                  cpu = 0
    [17:07:47.850409057, 17:07:47.946829256]       96420.199  wake  waker = swapper/2          wakee = migration/0
                                                                    prio = -100                cpu = 0
    [17:07:48.300313282, 17:07:48.300993892]          680.610 wake  waker = Xorg               wakee = ibus-ui-gtk3
                                                                    prio = 20                  cpu = 3
    [17:07:48.300330060, 17:07:48.300920648]          590.588 wake  waker = Xorg               wakee = ibus-x11
                                                                    prio = 20                  cpu = 3

Log of all the IRQs handled while a user-space process was running,
capturing the procname of the interrupted process as well as the name
and number of the IRQ:

.. code-block:: bash

   lttng-periodlog /path/to/trace \
       --period 'switch : $evt.$name == "sched_switch" && $evt.next_tid != 0 : $evt.$name == "sched_switch" && $evt.prev_tid == $begin.$evt.next_tid && $evt.cpu_id == $begin.$evt.cpu_id' \
       --period 'irq(switch) : $evt.$name == "irq_handler_entry" && $evt.cpu_id == $parent.$begin.$evt.cpu_id : $evt.$name == "irq_handler_exit" && $evt.cpu_id == $begin.$evt.cpu_id' \
       --period-captures 'irq : name = $evt.name, irq = $evt.irq, current = $parent.$begin.$evt.next_comm'

::

    Period log

    Begin                End                   Duration (us)  Name    Begin capture              End capture
    [10:58:26.169238875, 10:58:26.169244920]           6.045  switch
    [10:58:26.169598385, 10:58:26.169602967]           4.582  irq     name = ahci
                                                                      irq = 41
                                                                      current = lttng-consumerd
    [10:58:26.169811553, 10:58:26.169816218]           4.665  irq     name = ahci
                                                                      irq = 41
                                                                      current = lttng-consumerd
    [10:58:26.170025600, 10:58:26.170030197]           4.597  irq     name = ahci
                                                                      irq = 41
                                                                      current = lttng-consumerd
    [10:58:26.169236842, 10:58:26.170105711]         868.869  switch

Log of all the ``open`` system call periods aggregated by the
``sched_switch`` in which they occurred:

.. code-block:: bash

   lttng-periodlog /path/to/trace \
       --period 'switch : $evt.$name == "sched_switch" : $evt.$name == "sched_switch" && $begin.$evt.next_tid == $evt.prev_tid && $begin.$evt.cpu_id == $evt.cpu_id' \
       --period 'open(switch) : $evt.$name == "syscall_entry_open" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "syscall_exit_open" && $begin.$evt.cpu_id == $evt.cpu_id' \
       --period-captures 'switch : comm = $evt.next_comm, cpu = $evt.cpu_id, tid = $evt.next_tid' \
       --period-captures 'open : filename = $evt.filename : fd = $evt.ret' \
       --select open --aggregate-by switch

::

    Aggregated log
    Aggregation of (open) by switch
                          Parent                                 |                    |               Durations (us)                 |
    Begin                End                   Duration (us)  Name   | Child name  Count |    Min      Avg      Max   Stdev   Runtime | Parent captures
    [10:58:26.222823677, 10:58:26.224039381]        1215.704  switch | switch/open     3 |  7.517    9.548   11.248   1.887    28.644 | switch.comm = bash, switch.cpu = 3, switch.tid = 12420
    [10:58:26.856224058, 10:58:26.856589867]         365.809  switch | switch/open     1 | 77.620   77.620   77.620       ?    77.620 | switch.comm = ntpd, switch.cpu = 0, switch.tid = 11132
    [10:58:27.000068031, 10:58:27.000954859]         886.828  switch | switch/open    15 |  9.224   16.126   37.190   6.681   241.894 | switch.comm = irqbalance, switch.cpu = 0, switch.tid = 1656
    [10:58:27.225474282, 10:58:27.229160014]        3685.732  switch | switch/open    22 |  5.797    6.767    9.308   0.972   148.881 | switch.comm = bash, switch.cpu = 1, switch.tid = 12421
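As noted in the `Period select and aggregate parameters`_ section,
``lttng-periodstats`` and ``lttng-periodfreq`` accept the same two
flags as filters. A sketch (untested, reusing the two definitions
above) which restricts the statistics output to the ``open``/``switch``
combination only:

.. code-block:: bash

   lttng-periodstats /path/to/trace \
       --period 'switch : $evt.$name == "sched_switch" : $evt.$name == "sched_switch" && $begin.$evt.next_tid == $evt.prev_tid && $begin.$evt.cpu_id == $evt.cpu_id' \
       --period 'open(switch) : $evt.$name == "syscall_entry_open" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "syscall_exit_open" && $begin.$evt.cpu_id == $evt.cpu_id' \
       --select open --aggregate-by switch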
Statistics about the memory allocations performed within an ``open``
system call, itself within a single ``sched_switch`` (that is, with no
blocking or preemption):

.. code-block:: bash

   lttng-periodstats /path/to/trace \
       --period 'switch : $evt.$name == "sched_switch" : $evt.$name == "sched_switch" && $begin.$evt.next_tid == $evt.prev_tid && $begin.$evt.cpu_id == $evt.cpu_id' \
       --period 'open(switch) : $evt.$name == "syscall_entry_open" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "syscall_exit_open" && $begin.$evt.cpu_id == $evt.cpu_id' \
       --period 'alloc(open) : $evt.$name == "kmem_cache_alloc" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "kmem_cache_free" && $evt.ptr == $begin.$evt.ptr' \
       --period-captures 'switch : comm = $evt.next_comm, cpu = $evt.cpu_id, tid = $evt.next_tid' \
       --period-captures 'open : filename = $evt.filename : fd = $evt.ret' \
       --period-captures 'alloc : ptr = $evt.ptr'

::

    Timerange: [2015-01-06 10:58:26.140545481, 2015-01-06 10:58:27.229358936]

    Period tree:
    switch
    |-- open
        |-- alloc

    Period statistics (us)
    Period                 Count        Min         Avg          Max       Stdev       Runtime
    switch                   831      2.824    5233.363   172056.802   16197.531   4348924.614
    switch/open               41      5.797      12.123       77.620      12.076       497.039
    switch/open/alloc         44      1.152      10.277       74.476      11.582       452.175

    Per-parent period duration statistics (us)
    With active children
    Period               Parent              Min        Avg        Max      Stdev
    switch/open          switch           28.644    124.260    241.894     92.667
    switch/open/alloc    switch           24.036    113.044    229.713     87.827
    switch/open/alloc    switch/open       4.550     11.029     74.476     11.768

    Per-parent duration ratio (%)
    With active children
    Period               Parent              Min        Avg        Max      Stdev
    switch/open          switch                2     13.723         27     12.421
    switch/open/alloc    switch                1     12.901         25     12.041
    switch/open/alloc    switch/open          76     88.146        115      7.529

    Per-parent period count statistics
    With active children
    Period               Parent              Min        Avg        Max      Stdev
    switch/open          switch                1     10.250         22      9.979
    switch/open/alloc    switch                1     11.000         22     10.551
    switch/open/alloc    switch/open           1      1.073          2      0.264

    Per-parent period duration statistics (us)
    Globally
    Period               Parent              Min        Avg        Max      Stdev
    switch/open          switch            0.000      0.598    241.894     10.251
    switch/open/alloc    switch            0.000      0.544    229.713      9.443
    switch/open/alloc    switch/open       4.550     11.029     74.476     11.768

    Per-parent duration ratio (%)
    Globally
    Period               Parent              Min        Avg        Max      Stdev
    switch/open          switch                0      0.066         27      1.209
    switch/open/alloc    switch                0      0.062         25      1.150
    switch/open/alloc    switch/open          76     88.146        115      7.529

    Per-parent period count statistics
    Globally
    Period               Parent              Min        Avg        Max      Stdev
    switch/open          switch                0      0.049         22      0.929
    switch/open/alloc    switch                0      0.053         22      0.991
    switch/open/alloc    switch/open           1      1.073          2      0.264

These statistics can also be scoped by the value of the FD returned by
the ``open`` system call, by appending ``--group-by "open.fd"`` to the
previous command line, as shown in the sketch below. That way, the
previous tables are output once for each FD value returned, making it
possible to observe the behaviour of a system call based on its
parameters. Using ``lttng-periodfreq`` or the ``--freq`` parameter,
these tables can also be presented as frequency distributions.
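A sketch of that FD scoping, trimmed to the options relevant to the
grouping (the ``alloc`` period and the ``switch`` captures from the
previous command line are omitted here for brevity):

.. code-block:: bash

   # One set of statistics tables per FD value returned by open(2)
   lttng-periodstats /path/to/trace \
       --period 'switch : $evt.$name == "sched_switch" : $evt.$name == "sched_switch" && $begin.$evt.next_tid == $evt.prev_tid && $begin.$evt.cpu_id == $evt.cpu_id' \
       --period 'open(switch) : $evt.$name == "syscall_entry_open" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "syscall_exit_open" && $begin.$evt.cpu_id == $evt.cpu_id' \
       --period-captures 'open : filename = $evt.filename : fd = $evt.ret' \
       --group-by "open.fd"

Replacing ``lttng-periodstats`` with ``lttng-periodfreq``, or adding
``--freq``, presents the same groups as frequency distributions.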
Progress options ---------------- If the `progressbar `_ optional dependency is installed, a progress bar is available to indicate the progress of the analysis. By default, the progress bar is based on the current event's timestamp. Progress options are: .. list-table:: Available progress command-line options :header-rows: 1 * - Command-line option - Description * - ``--no-progress`` - Disable the progress bar. * - ``--progress-use-size`` - Use the approximate event size instead of the current event's timestamp to estimate the progress value. Machine interface ----------------- If you want to display LTTng analyses results in a custom viewer, you can use the JSON-based LTTng analyses machine interface (LAMI). Each command in the previous table has its corresponding LAMI version with the ``-mi`` suffix. For example, the LAMI version of ``lttng-cputop`` is ``lttng-cputop-mi``. This version of LTTng analyses conforms to `LAMI 1.0 `_. Examples ======== This section shows a few examples of using some LTTng analyses. I/O --- Partition and system call latency statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolatencystats /path/to/trace :: Timerange: [2015-01-06 10:58:26.140545481, 2015-01-06 10:58:27.229358936] Syscalls latency statistics (usec): Type Count Min Average Max Stdev ----------------------------------------------------------------------------------------- Open 45 5.562 13.835 77.683 15.263 Read 109 0.316 5.774 62.569 9.277 Write 101 0.256 7.060 48.531 8.555 Sync 207 19.384 40.664 160.188 21.201 Disk latency statistics (usec): Name Count Min Average Max Stdev ----------------------------------------------------------------------------------------- dm-0 108 0.001 0.004 0.007 1.306 I/O request latency distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolatencyfreq /path/to/trace :: Timerange: [2015-01-06 10:58:26.140545481, 2015-01-06 10:58:27.229358936] Open latency distribution (usec) ############################################################################### 5.562 ███████████████████████████████████████████████████████████████████ 25 9.168 ██████████ 4 12.774 █████████████████████ 8 16.380 ████████ 3 19.986 █████ 2 23.592 0 27.198 0 30.804 0 34.410 ██ 1 38.016 0 41.623 0 45.229 0 48.835 0 52.441 0 56.047 0 59.653 0 63.259 0 66.865 0 70.471 0 74.077 █████ 2 Top system call latencies ~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolatencytop /path/to/trace --limit=3 --minsize=2 :: Checking the trace for lost events... 
Timerange: [2015-01-15 12:18:37.216484041, 2015-01-15 12:18:53.821580313] Top open syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:50.432950815,12:18:50.870648568] open 437697.753 N/A apache2 31517 /var/lib/php5/sess_0ifir2hangm8ggaljdphl9o5b5 (fd=13) [12:18:52.946080165,12:18:52.946132278] open 52.113 N/A apache2 31588 /var/lib/php5/sess_mr9045p1k55vin1h0vg7rhgd63 (fd=13) [12:18:46.800846035,12:18:46.800874916] open 28.881 N/A apache2 31591 /var/lib/php5/sess_r7c12pccfvjtas15g3j69u14h0 (fd=13) [12:18:51.389797604,12:18:51.389824426] open 26.822 N/A apache2 31520 /var/lib/php5/sess_4sdb1rtjkhb78sabnoj8gpbl00 (fd=13) Top read syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:37.256073107,12:18:37.256555967] read 482.860 7.00 B bash 10237 unknown (origin not found) (fd=3) [12:18:52.000209798,12:18:52.000252304] read 42.506 1.00 KB irqbalance 1337 /proc/interrupts (fd=3) [12:18:37.256559439,12:18:37.256601615] read 42.176 5.00 B bash 10237 unknown (origin not found) (fd=3) [12:18:42.000281918,12:18:42.000320016] read 38.098 1.00 KB irqbalance 1337 /proc/interrupts (fd=3) Top write syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:49.913241516,12:18:49.915908862] write 2667.346 95.00 B apache2 31584 /var/log/apache2/access.log (fd=8) [12:18:37.472823631,12:18:37.472859836] writev 36.205 21.97 KB apache2 31544 unknown (origin not found) (fd=12) [12:18:37.991578372,12:18:37.991612724] writev 34.352 21.97 KB apache2 31589 unknown (origin not found) (fd=12) [12:18:39.547778549,12:18:39.547812515] writev 33.966 21.97 KB apache2 31584 unknown (origin not found) (fd=12) Top sync syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:50.162776739,12:18:51.157522361] sync 994745.622 N/A sync 22791 None (fd=None) [12:18:37.227867532,12:18:37.232289687] sync_file_range 4422.155 N/A lttng-consumerd 19964 /home/julien/lttng-traces/analysis-20150115-120942/kernel/metadata (fd=32) [12:18:37.238076585,12:18:37.239012027] sync_file_range 935.442 N/A lttng-consumerd 19964 /home/julien/lttng-traces/analysis-20150115-120942/kernel/metadata (fd=32) [12:18:37.220974711,12:18:37.221647124] sync_file_range 672.413 N/A lttng-consumerd 19964 /home/julien/lttng-traces/analysis-20150115-120942/kernel/metadata (fd=32) I/O operations log ~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolog /path/to/trace :: [10:58:26.221618530,10:58:26.221620659] write 2.129 8.00 B /usr/bin/x-term 11793 anon_inode:[eventfd] (fd=5) [10:58:26.221623609,10:58:26.221628055] read 4.446 50.00 B /usr/bin/x-term 11793 /dev/ptmx (fd=24) [10:58:26.221638929,10:58:26.221640008] write 1.079 8.00 B /usr/bin/x-term 11793 anon_inode:[eventfd] (fd=5) [10:58:26.221676232,10:58:26.221677385] read 1.153 8.00 B /usr/bin/x-term 11793 anon_inode:[eventfd] (fd=5) [10:58:26.223401804,10:58:26.223411683] open 9.879 N/A sleep 12420 /etc/ld.so.cache (fd=3) [10:58:26.223448060,10:58:26.223455577] open 7.517 N/A sleep 12420 /lib/x86_64-linux-gnu/libc.so.6 (fd=3) [10:58:26.223456522,10:58:26.223458898] read 2.376 832.00 B sleep 12420 /lib/x86_64-linux-gnu/libc.so.6 (fd=3) [10:58:26.223918068,10:58:26.223929316] open 11.248 N/A sleep 12420 (fd=3) [10:58:26.231881565,10:58:26.231895970] writev 14.405 16.00 B /usr/bin/x-term 11793 socket:[45650] (fd=4) [10:58:26.231979636,10:58:26.231988446] recvmsg 8.810 16.00 B Xorg 1827 socket:[47480] (fd=38) I/O usage top ~~~~~~~~~~~~~ .. 
code-block:: bash lttng-iousagetop /path/to/trace :: Timerange: [2014-10-07 16:36:00.733214969, 2014-10-07 16:36:18.804584183] Per-process I/O Read ############################################################################### ██████████████████████████████████████████████████ 16.00 MB lttng-consumerd (2619) 0 B file 4.00 B net 16.00 MB unknown █████ 1.72 MB lttng-consumerd (2619) 0 B file 0 B net 1.72 MB unknown █ 398.13 KB postgres (4219) 121.05 KB file 277.07 KB net 8.00 B unknown 256.09 KB postgres (1348) 0 B file 255.97 KB net 117.00 B unknown 204.81 KB postgres (4218) 204.81 KB file 0 B net 0 B unknown 123.77 KB postgres (4220) 117.50 KB file 6.26 KB net 8.00 B unknown Per-process I/O Write ############################################################################### ██████████████████████████████████████████████████ 16.00 MB lttng-consumerd (2619) 0 B file 8.00 MB net 8.00 MB unknown ██████ 2.20 MB postgres (4219) 2.00 MB file 202.23 KB net 0 B unknown █████ 1.73 MB lttng-consumerd (2619) 0 B file 887.73 KB net 882.58 KB unknown ██ 726.33 KB postgres (1165) 8.00 KB file 6.33 KB net 712.00 KB unknown 158.69 KB postgres (1168) 158.69 KB file 0 B net 0 B unknown 80.66 KB postgres (1348) 0 B file 80.66 KB net 0 B unknown Files Read ############################################################################### ██████████████████████████████████████████████████ 8.00 MB anon_inode:[lttng_stream] (lttng-consumerd) 'fd 32 in lttng-consumerd (2619)' █████ 834.41 KB base/16384/pg_internal.init 'fd 7 in postgres (4219)', 'fd 7 in postgres (4220)', 'fd 7 in postgres (4221)', 'fd 7 in postgres (4222)', 'fd 7 in postgres (4223)', 'fd 7 in postgres (4224)', 'fd 7 in postgres (4225)', 'fd 7 in postgres (4226)' █ 256.09 KB socket:[8893] (postgres) 'fd 9 in postgres (1348)' █ 174.69 KB pg_stat_tmp/pgstat.stat 'fd 9 in postgres (4218)', 'fd 9 in postgres (1167)' 109.48 KB global/pg_internal.init 'fd 7 in postgres (4218)', 'fd 7 in postgres (4219)', 'fd 7 in postgres (4220)', 'fd 7 in postgres (4221)', 'fd 7 in postgres (4222)', 'fd 7 in postgres (4223)', 'fd 7 in postgres (4224)', 'fd 7 in postgres (4225)', 'fd 7 in postgres (4226)' 104.30 KB base/11951/pg_internal.init 'fd 7 in postgres (4218)' 12.85 KB socket (lttng-sessiond) 'fd 30 in lttng-sessiond (384)' 4.50 KB global/pg_filenode.map 'fd 7 in postgres (4218)', 'fd 7 in postgres (4219)', 'fd 7 in postgres (4220)', 'fd 7 in postgres (4221)', 'fd 7 in postgres (4222)', 'fd 7 in postgres (4223)', 'fd 7 in postgres (4224)', 'fd 7 in postgres (4225)', 'fd 7 in postgres (4226)' 4.16 KB socket (postgres) 'fd 9 in postgres (4226)' 4.00 KB /proc/interrupts 'fd 3 in irqbalance (1104)' Files Write ############################################################################### ██████████████████████████████████████████████████ 8.00 MB socket:[56371] (lttng-consumerd) 'fd 30 in lttng-consumerd (2619)' █████████████████████████████████████████████████ 8.00 MB pipe:[53306] (lttng-consumerd) 'fd 12 in lttng-consumerd (2619)' ██████████ 1.76 MB pg_xlog/00000001000000000000000B 'fd 31 in postgres (4219)' █████ 887.82 KB socket:[56369] (lttng-consumerd) 'fd 26 in lttng-consumerd (2619)' █████ 882.58 KB pipe:[53309] (lttng-consumerd) 'fd 18 in lttng-consumerd (2619)' 160.00 KB /var/lib/postgresql/9.1/main/base/16384/16602 'fd 14 in postgres (1165)' 158.69 KB pg_stat_tmp/pgstat.tmp 'fd 3 in postgres (1168)' 144.00 KB /var/lib/postgresql/9.1/main/base/16384/16613 'fd 12 in postgres (1165)' 88.00 KB /var/lib/postgresql/9.1/main/base/16384/16609 'fd 11 in postgres 
(1165)' 78.28 KB socket:[8893] (postgres) 'fd 9 in postgres (1348)' Block I/O Read ############################################################################### Block I/O Write ############################################################################### ██████████████████████████████████████████████████ 1.76 MB postgres (pid=4219) ████ 160.00 KB postgres (pid=1168) ██ 100.00 KB kworker/u8:0 (pid=1540) ██ 96.00 KB jbd2/vda1-8 (pid=257) █ 40.00 KB postgres (pid=1166) 8.00 KB kworker/u9:0 (pid=4197) 4.00 KB kworker/u9:2 (pid=1381) Disk nr_sector ############################################################################### ███████████████████████████████████████████████████████████████████ 4416.00 sectors vda1 Disk nr_requests ############################################################################### ████████████████████████████████████████████████████████████████████ 177.00 requests vda1 Disk request time/sector ############################################################################### ██████████████████████████████████████████████████████████████████ 0.01 ms vda1 Network recv_bytes ############################################################################### ███████████████████████████████████████████████████████ 739.50 KB eth0 █████ 80.27 KB lo Network sent_bytes ############################################################################### ████████████████████████████████████████████████████████ 9.36 MB eth0 System calls -------- Per-TID and global system call statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-syscallstats /path/to/trace :: Timerange: [2015-01-15 12:18:37.216484041, 2015-01-15 12:18:53.821580313] Per-TID syscalls statistics (usec) find (22785) Count Min Average Max Stdev Return values - getdents 14240 0.380 364.301 43372.450 1629.390 {'success': 14240} - close 14236 0.233 0.506 4.932 0.217 {'success': 14236} - fchdir 14231 0.252 0.407 5.769 0.117 {'success': 14231} - open 7123 0.779 2.321 12.697 0.936 {'success': 7119, 'ENOENT': 4} - newfstatat 7118 1.457 143.562 28103.532 1410.281 {'success': 7118} - openat 7118 1.525 2.411 9.107 0.771 {'success': 7118} - newfstat 7117 0.272 0.654 8.707 0.248 {'success': 7117} - write 573 0.298 0.715 8.584 0.391 {'success': 573} - brk 27 0.615 5.768 30.792 7.830 {'success': 27} - rt_sigaction 22 0.227 0.283 0.589 0.098 {'success': 22} - mmap 12 1.116 2.116 3.597 0.762 {'success': 12} - mprotect 6 1.185 2.235 3.923 1.148 {'success': 6} - read 5 0.925 2.101 6.300 2.351 {'success': 5} - ioctl 4 0.342 1.151 2.280 0.873 {'success': 2, 'ENOTTY': 2} - access 4 1.166 2.530 4.202 1.527 {'ENOENT': 4} - rt_sigprocmask 3 0.325 0.570 0.979 0.357 {'success': 3} - dup2 2 0.250 0.562 0.874 ? {'success': 2} - munmap 2 3.006 5.399 7.792 ? {'success': 2} - execve 1 7277.974 7277.974 7277.974 ? {'success': 1} - setpgid 1 0.945 0.945 0.945 ? {'success': 1} - fcntl 1 ? 0.000 0.000 ? {} - newuname 1 1.240 1.240 1.240 ? {'success': 1} Total: 71847 ----------------------------------------------------------------------------------------------------------------- apache2 (31517) Count Min Average Max Stdev Return values - fcntl 192 ? 0.000 0.000 ? 
{} - newfstat 156 0.237 0.484 1.102 0.222 {'success': 156} - read 144 0.307 1.602 16.307 1.698 {'success': 117, 'EAGAIN': 27} - access 96 0.705 1.580 3.364 0.670 {'success': 12, 'ENOENT': 84} - newlstat 84 0.459 0.738 1.456 0.186 {'success': 63, 'ENOENT': 21} - newstat 74 0.735 2.266 11.212 1.772 {'success': 50, 'ENOENT': 24} - lseek 72 0.317 0.522 0.915 0.112 {'success': 72} - close 39 0.471 0.615 0.867 0.069 {'success': 39} - open 36 2.219 12162.689 437697.753 72948.868 {'success': 36} - getcwd 28 0.287 0.701 1.331 0.277 {'success': 28} - poll 27 1.080 1139.669 2851.163 856.723 {'success': 27} - times 24 0.765 0.956 1.327 0.107 {'success': 24} - setitimer 24 0.499 5.848 16.668 4.041 {'success': 24} - write 24 5.467 6.784 16.827 2.459 {'success': 24} - writev 24 10.241 17.645 29.817 5.116 {'success': 24} - mmap 15 3.060 3.482 4.406 0.317 {'success': 15} - munmap 15 2.944 3.502 4.154 0.427 {'success': 15} - brk 12 0.738 4.579 13.795 4.437 {'success': 12} - chdir 12 0.989 1.600 2.353 0.385 {'success': 12} - flock 6 0.906 1.282 2.043 0.423 {'success': 6} - rt_sigaction 6 0.530 0.725 1.123 0.217 {'success': 6} - pwrite64 6 1.262 1.430 1.692 0.143 {'success': 6} - rt_sigprocmask 6 0.539 0.650 0.976 0.162 {'success': 6} - shutdown 3 7.323 8.487 10.281 1.576 {'success': 3} - getsockname 3 1.015 1.228 1.585 0.311 {'success': 3} - accept4 3 5174453.611 3450157.282 5176018.235 ? {'success': 2} Total: 1131 Interrupts ---------- Hardware and software interrupt statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-irqstats /path/to/trace :: Timerange: [2014-03-11 16:05:41.314824752, 2014-03-11 16:05:45.041994298] Hard IRQ Duration (us) count min avg max stdev ----------------------------------------------------------------------------------| 1: 30 10.901 45.500 64.510 18.447 | 42: 259 3.203 7.863 21.426 3.183 | 43: 2 3.859 3.976 4.093 0.165 | 44: 92 0.300 3.995 6.542 2.181 | Soft IRQ Duration (us) Raise latency (us) count min avg max stdev | count min avg max stdev ----------------------------------------------------------------------------------|------------------------------------------------------------ 1: 495 0.202 21.058 51.060 11.047 | 53 2.141 11.217 20.005 7.233 3: 14 0.133 9.177 32.774 10.483 | 14 0.763 3.703 10.902 3.448 4: 257 5.981 29.064 125.862 15.891 | 257 0.891 3.104 15.054 2.046 6: 26 0.309 1.198 1.748 0.329 | 26 9.636 39.222 51.430 11.246 7: 299 1.185 14.768 90.465 15.992 | 298 1.286 31.387 61.700 11.866 9: 338 0.592 3.387 13.745 1.356 | 147 2.480 29.299 64.453 14.286 Interrupt handler duration frequency distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
code-block:: bash lttng-irqfreq --timerange=[16:05:42,16:05:45] --irq=44 --stats /path/to/trace :: Timerange: [2014-03-11 16:05:42.042034570, 2014-03-11 16:05:44.998914297] Hard IRQ Duration (us) count min avg max stdev ----------------------------------------------------------------------------------| 44: 72 0.300 4.018 6.542 2.164 | Frequency distribution iwlwifi (44) ############################################################################### 0.300 █████ 1.00 0.612 ██████████████████████████████████████████████████████████████ 12.00 0.924 ████████████████████ 4.00 1.236 ██████████ 2.00 1.548 0.00 1.861 █████ 1.00 2.173 0.00 2.485 █████ 1.00 2.797 ██████████████████████████ 5.00 3.109 █████ 1.00 3.421 ███████████████ 3.00 3.733 0.00 4.045 █████ 1.00 4.357 █████ 1.00 4.669 ██████████ 2.00 4.981 ██████████ 2.00 5.294 █████████████████████████████████████████ 8.00 5.606 ████████████████████████████████████████████████████████████████████ 13.00 5.918 ██████████████████████████████████████████████████████████████ 12.00 6.230 ███████████████ 3.00 Community ========= LTTng analyses is part of the `LTTng `_ project and shares its community. We hope you have fun trying this project and please remember it is a work in progress; feedback, bug reports and improvement ideas are always welcome! .. list-table:: LTTng analyses project's communication channels :header-rows: 1 * - Item - Location - Notes * - Mailing list - `lttng-dev `_ (``lttng-dev@lists.lttng.org``) - Preferably, use the ``[lttng-analyses]`` subject prefix * - IRC - ``#lttng`` on the OFTC network - * - Code contribution - Create a new GitHub `pull request `_ - * - Bug reporting - Create a new GitHub `issue `_ - * - Continuous integration - `lttng-analyses_master_build item `_ on LTTng's CI and `lttng/lttng-analyses project `_ on Travis CI - * - Blog - The `LTTng blog `_ contains some posts about LTTng analyses - Keywords: lttng tracing Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Intended Audience :: Developers Classifier: Intended Audience :: System Administrators Classifier: Topic :: System :: Monitoring Classifier: License :: OSI Approved :: MIT License Classifier: Programming Language :: Python :: 3.4 lttnganalyses-0.6.1/lttng-iolatencytop0000775000175000017500000000235412553274232021655 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
from lttnganalyses.cli import io if __name__ == '__main__': io.runlatencytop() lttnganalyses-0.6.1/tests/0000775000175000017500000000000013033742625017225 5ustar mjeansonmjeanson00000000000000lttnganalyses-0.6.1/tests/common/0000775000175000017500000000000013033742625020515 5ustar mjeansonmjeanson00000000000000lttnganalyses-0.6.1/tests/common/utils.py0000664000175000017500000000373012726625546022244 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import os import time class TimezoneUtils(): def __init__(self): self.original_tz = None def set_up_timezone(self): # Make sure that the local timezone as seen by the time module # is the same regardless of where the test is actually # run. US/Eastern was picked arbitrarily. self.original_tz = os.environ.get('TZ') os.environ['TZ'] = 'US/Eastern' try: time.tzset() except AttributeError: print('Warning: time.tzset() is unavailable on Windows.' 'This may cause test failures.') def tear_down_timezone(self): # Restore the original value of TZ if any, else delete it from # the environment variables. if self.original_tz: os.environ['TZ'] = self.original_tz else: del os.environ['TZ'] lttnganalyses-0.6.1/tests/common/test_format_utils.py0000664000175000017500000001662412723101552024640 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
import unittest from lttnganalyses.core import stats from lttnganalyses.common import format_utils from .utils import TimezoneUtils class TestFormatSize(unittest.TestCase): def test_negative(self): self.assertRaises(ValueError, format_utils.format_size, -1) def test_zero(self): result = format_utils.format_size(0) result_decimal = format_utils.format_size(0, binary_prefix=False) self.assertEqual(result, '0 B') self.assertEqual(result_decimal, '0 B') def test_huge(self): # 2000 YiB or 2475.88 YB huge_value = 2417851639229258349412352000 result = format_utils.format_size(huge_value) result_decimal = format_utils.format_size(huge_value, binary_prefix=False) self.assertEqual(result, '2000.00 YiB') self.assertEqual(result_decimal, '2417.85 YB') def test_reasonable(self): # 2 GB or 1.86 GiB reasonable_value = 2000000000 result = format_utils.format_size(reasonable_value) result_decimal = format_utils.format_size(reasonable_value, binary_prefix=False) self.assertEqual(result, '1.86 GiB') self.assertEqual(result_decimal, '2.00 GB') class TestFormatPrioList(unittest.TestCase): def test_empty(self): prio_list = [] result = format_utils.format_prio_list(prio_list) self.assertEqual(result, '[]') def test_one_prio(self): prio_list = [stats.PrioEvent(0, 0)] result = format_utils.format_prio_list(prio_list) self.assertEqual(result, '[0]') def test_multiple_prios(self): prio_list = [stats.PrioEvent(0, 0), stats.PrioEvent(0, 1)] result = format_utils.format_prio_list(prio_list) self.assertEqual(result, '[0, 1]') def test_repeated_prio(self): prio_list = [stats.PrioEvent(0, 0), stats.PrioEvent(0, 0)] result = format_utils.format_prio_list(prio_list) self.assertEqual(result, '[0 (2 times)]') def test_repeated_prios(self): prio_list = [ stats.PrioEvent(0, 0), stats.PrioEvent(0, 1), stats.PrioEvent(0, 0), stats.PrioEvent(0, 1) ] result = format_utils.format_prio_list(prio_list) self.assertEqual(result, '[0 (2 times), 1 (2 times)]') class TestFormatTimestamp(unittest.TestCase): # This may or may not be the time of the Linux 0.0.1 announcement. 
ARBITRARY_TIMESTAMP = 683153828123456789 def setUp(self): self.tz_utils = TimezoneUtils() self.tz_utils.set_up_timezone() def tearDown(self): self.tz_utils.tear_down_timezone() def test_time(self): result = format_utils.format_timestamp(self.ARBITRARY_TIMESTAMP) result_gmt = format_utils.format_timestamp( self.ARBITRARY_TIMESTAMP, gmt=True ) self.assertEqual(result, '16:57:08.123456789') self.assertEqual(result_gmt, '20:57:08.123456789') def test_date(self): result = format_utils.format_timestamp( self.ARBITRARY_TIMESTAMP, print_date=True ) result_gmt = format_utils.format_timestamp( self.ARBITRARY_TIMESTAMP, print_date=True, gmt=True ) self.assertEqual(result, '1991-08-25 16:57:08.123456789') self.assertEqual(result_gmt, '1991-08-25 20:57:08.123456789') def test_negative(self): # Make sure the time module handles pre-epoch dates correctly result = format_utils.format_timestamp( -self.ARBITRARY_TIMESTAMP, print_date=True ) result_gmt = format_utils.format_timestamp( -self.ARBITRARY_TIMESTAMP, print_date=True, gmt=True ) self.assertEqual(result, '1948-05-08 23:02:51.876543211') self.assertEqual(result_gmt, '1948-05-09 03:02:51.876543211') class TestFormatTimeRange(unittest.TestCase): BEGIN_TS = 683153828123456789 # 1 hour later END_TS = 683157428123456789 def _mock_format_timestamp(self, timestamp, print_date, gmt): date_str = '1991-08-25 ' if timestamp == TestFormatTimeRange.BEGIN_TS: if gmt: time_str = '20:57:08.123456789' else: time_str = '16:57:08.123456789' elif timestamp == TestFormatTimeRange.END_TS: if gmt: time_str = '21:57:08.123456789' else: time_str = '17:57:08.123456789' if print_date: return date_str + time_str else: return time_str def setUp(self): self._original_format_timestamp = format_utils.format_timestamp format_utils.format_timestamp = self._mock_format_timestamp def tearDown(self): format_utils.format_timestamp = self._original_format_timestamp def test_time_only(self): result = format_utils.format_time_range( self.BEGIN_TS, self.END_TS ) result_gmt = format_utils.format_time_range( self.BEGIN_TS, self.END_TS, gmt=True ) self.assertEqual(result, '[16:57:08.123456789, 17:57:08.123456789]') self.assertEqual(result_gmt, '[20:57:08.123456789, 21:57:08.123456789]') def test_print_date(self): result = format_utils.format_time_range( self.BEGIN_TS, self.END_TS, print_date=True ) result_gmt = format_utils.format_time_range( self.BEGIN_TS, self.END_TS, print_date=True, gmt=True ) self.assertEqual( result, '[1991-08-25 16:57:08.123456789, 1991-08-25 17:57:08.123456789]' ) self.assertEqual( result_gmt, '[1991-08-25 20:57:08.123456789, 1991-08-25 21:57:08.123456789]' ) class TestFormatIpv4(unittest.TestCase): IP_INTEGER = 0x7f000001 IP_SEQUENCE = [127, 0, 0, 1] def test_integer(self): result = format_utils.format_ipv4(self.IP_INTEGER) self.assertEqual(result, '127.0.0.1') def test_sequence(self): result = format_utils.format_ipv4(self.IP_SEQUENCE) self.assertEqual(result, '127.0.0.1') def test_with_port(self): result = format_utils.format_ipv4(self.IP_SEQUENCE, port=8080) self.assertEqual(result, '127.0.0.1:8080') lttnganalyses-0.6.1/tests/common/test_parse_utils.py0000664000175000017500000003205712723101552024460 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, 
publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import datetime import unittest from lttnganalyses.common import parse_utils from .utils import TimezoneUtils # Mock of babeltrace's TraceCollection, used to test date methods class TraceCollection(): def __init__(self, begin_ts, end_ts): self.begin_ts = begin_ts self.end_ts = end_ts @property def timestamp_begin(self): return self.begin_ts @property def timestamp_end(self): return self.end_ts class TestParseSize(unittest.TestCase): def test_garbage(self): self.assertRaises(ValueError, parse_utils.parse_size, 'ceci n\'est pas une size') self.assertRaises(ValueError, parse_utils.parse_size, '12.34.56') def test_invalid_units(self): self.assertRaises(ValueError, parse_utils.parse_size, '500 furlongs') def test_binary_units(self): result = parse_utils.parse_size('500 KiB') self.assertEqual(result, 512000) result = parse_utils.parse_size('-500 KiB') self.assertEqual(result, -512000) # no space left between units and value is intentional result = parse_utils.parse_size('0.01MiB') self.assertEqual(result, 10485) result = parse_utils.parse_size('1200 YiB') self.assertEqual(result, 1450710983537555009647411200) result = parse_utils.parse_size('1234 B') self.assertEqual(result, 1234) def test_coreutils_units(self): result = parse_utils.parse_size('500 K') self.assertEqual(result, 512000) result = parse_utils.parse_size('-500 K') self.assertEqual(result, -512000) # no space left between units and value is intentional result = parse_utils.parse_size('0.01M') self.assertEqual(result, 10485) result = parse_utils.parse_size('1200 Y') self.assertEqual(result, 1450710983537555009647411200) def test_si_units(self): result = parse_utils.parse_size('500 KB') self.assertEqual(result, 500000) result = parse_utils.parse_size('-500 KB') self.assertEqual(result, -500000) # no space left between units and value is intentional result = parse_utils.parse_size('0.01MB') self.assertEqual(result, 10000) result = parse_utils.parse_size('40 ZB') self.assertEqual(result, 40000000000000000000000) # Sizes a bit larger than 40 ZB (e.g. 50 ZB and up) with # decimal units don't get parsed quite as precisely because of # the nature of floating point numbers. 
If precision is needed # for larger values with these units, it could be fixed, but # for now it seems unlikely so we leave it as is def test_no_units(self): result = parse_utils.parse_size('1234') self.assertEqual(result, 1234) result = parse_utils.parse_size('1234.567') self.assertEqual(result, 1234) result = parse_utils.parse_size('-1234.567') self.assertEqual(result, -1234) class TestParseDuration(unittest.TestCase): def test_garbage(self): self.assertRaises(ValueError, parse_utils.parse_duration, 'ceci n\'est pas une duration') self.assertRaises(ValueError, parse_utils.parse_duration, '12.34.56') def test_invalid_units(self): self.assertRaises(ValueError, parse_utils.parse_duration, '500 furlongs') def test_valid_units(self): result = parse_utils.parse_duration('1s') self.assertEqual(result, 1000000000) result = parse_utils.parse_duration('-1s') self.assertEqual(result, -1000000000) result = parse_utils.parse_duration('1234.56 ms') self.assertEqual(result, 1234560000) result = parse_utils.parse_duration('1.23 us') self.assertEqual(result, 1230) result = parse_utils.parse_duration('1.23 µs') self.assertEqual(result, 1230) result = parse_utils.parse_duration('1234 ns') self.assertEqual(result, 1234) result = parse_utils.parse_duration('0.001 ns') self.assertEqual(result, 0) def test_no_units(self): result = parse_utils.parse_duration('1234.567') self.assertEqual(result, 1234567000000) class TestParseDate(unittest.TestCase): def setUp(self): self.tz_utils = TimezoneUtils() self.tz_utils.set_up_timezone() def tearDown(self): self.tz_utils.tear_down_timezone() def test_parse_full_date_nsec(self): date_expected = datetime.datetime(2014, 12, 12, 17, 29, 43) nsec_expected = 802588035 date, nsec = parse_utils.parse_date('2014-12-12 17:29:43.802588035') self.assertEqual(date, date_expected) self.assertEqual(nsec, nsec_expected) date, nsec = parse_utils.parse_date('2014-12-12T17:29:43.802588035') self.assertEqual(date, date_expected) self.assertEqual(nsec, nsec_expected) def test_parse_full_date(self): date_expected = datetime.datetime(2014, 12, 12, 17, 29, 43) nsec_expected = 0 date, nsec = parse_utils.parse_date('2014-12-12 17:29:43') self.assertEqual(date, date_expected) self.assertEqual(nsec, nsec_expected) date, nsec = parse_utils.parse_date('2014-12-12T17:29:43') self.assertEqual(date, date_expected) self.assertEqual(nsec, nsec_expected) def test_parse_time_nsec(self): time_expected = datetime.time(17, 29, 43) nsec_expected = 802588035 time, nsec = parse_utils.parse_date('17:29:43.802588035') self.assertEqual(time, time_expected) self.assertEqual(nsec, nsec_expected) def test_parse_time(self): time_expected = datetime.time(17, 29, 43) nsec_expected = 0 time, nsec = parse_utils.parse_date('17:29:43') self.assertEqual(time, time_expected) self.assertEqual(nsec, nsec_expected) def test_parse_timestamp(self): time_expected = datetime.datetime(2014, 12, 12, 17, 29, 43) nsec_expected = 802588035 date, nsec = parse_utils.parse_date('1418423383802588035') self.assertEqual(date, time_expected) self.assertEqual(nsec, nsec_expected) def test_parse_date_invalid(self): self.assertRaises(ValueError, parse_utils.parse_date, 'ceci n\'est pas une date') class TestParseTraceCollectionDate(unittest.TestCase): DATE_FULL = '2014-12-12 17:29:43' DATE_TIME = '17:29:43' SINGLE_DAY_COLLECTION = TraceCollection( 1418423383802588035, 1418423483802588035 ) MULTI_DAY_COLLECTION = TraceCollection( 1418423383802588035, 1419423383802588035 ) def _mock_parse_date(self, date): if date == self.DATE_FULL: return 
(datetime.datetime(2014, 12, 12, 17, 29, 43), 0) elif date == self.DATE_TIME: return (datetime.time(17, 29, 43), 0) else: raise ValueError('Unrecognised date format: {}'.format(date)) def setUp(self): self.tz_utils = TimezoneUtils() self.tz_utils.set_up_timezone() self._original_parse_date = parse_utils.parse_date parse_utils.parse_date = self._mock_parse_date def tearDown(self): self.tz_utils.tear_down_timezone() parse_utils.parse_date = self._original_parse_date def test_invalid_date(self): self.assertRaises( ValueError, parse_utils.parse_trace_collection_date, self.SINGLE_DAY_COLLECTION, 'ceci n\'est pas une date' ) def test_single_day_date(self): expected = 1418423383000000000 result = parse_utils.parse_trace_collection_date( self.SINGLE_DAY_COLLECTION, self.DATE_FULL ) self.assertEqual(result, expected) def test_single_day_time(self): expected = 1418423383000000000 result = parse_utils.parse_trace_collection_date( self.SINGLE_DAY_COLLECTION, self.DATE_TIME ) self.assertEqual(result, expected) def test_multi_day_date(self): expected = 1418423383000000000 result = parse_utils.parse_trace_collection_date( self.MULTI_DAY_COLLECTION, self.DATE_FULL ) self.assertEqual(result, expected) def test_multi_day_time(self): self.assertRaises( ValueError, parse_utils.parse_trace_collection_date, self.MULTI_DAY_COLLECTION, self.DATE_TIME ) class TestParseTraceCollectionTimeRange(unittest.TestCase): DATE_FULL_BEGIN = '2014-12-12 17:29:43' DATE_FULL_END = '2014-12-12 17:29:44' DATE_TIME_BEGIN = '17:29:43' DATE_TIME_END = '17:29:44' EXPECTED_BEGIN = 1418423383000000000 EXPECTED_END = 1418423384000000000 SINGLE_DAY_COLLECTION = TraceCollection( 1418423383802588035, 1418423483802588035 ) MULTI_DAY_COLLECTION = TraceCollection( 1418423383802588035, 1419423383802588035 ) TIME_RANGE_FMT = '[{}, {}]' def _mock_parse_trace_collection_date(self, collection, date, gmt=False, handles=None): if collection == self.SINGLE_DAY_COLLECTION: if date == self.DATE_FULL_BEGIN or date == self.DATE_TIME_BEGIN: timestamp = 1418423383000000000 elif date == self.DATE_FULL_END or date == self.DATE_TIME_END: timestamp = 1418423384000000000 else: raise ValueError('Unrecognised date format: {}'.format(date)) elif collection == self.MULTI_DAY_COLLECTION: if date == self.DATE_FULL_BEGIN: timestamp = 1418423383000000000 elif date == self.DATE_FULL_END: timestamp = 1418423384000000000 elif date == self.DATE_TIME_BEGIN or date == self.DATE_TIME_END: raise ValueError( 'Invalid date format for multi-day trace: {}'.format(date) ) else: raise ValueError('Unrecognised date format: {}'.format(date)) return timestamp def setUp(self): self._original_parse_trace_collection_date = ( parse_utils.parse_trace_collection_date ) parse_utils.parse_trace_collection_date = ( self._mock_parse_trace_collection_date ) def tearDown(self): parse_utils.parse_trace_collection_date = ( self._original_parse_trace_collection_date ) def test_invalid_format(self): self.assertRaises( ValueError, parse_utils.parse_trace_collection_time_range, self.SINGLE_DAY_COLLECTION, 'ceci n\'est pas un time range' ) def test_single_day_date(self): time_range = self.TIME_RANGE_FMT.format( self.DATE_FULL_BEGIN, self.DATE_FULL_END ) begin, end = parse_utils.parse_trace_collection_time_range( self.SINGLE_DAY_COLLECTION, time_range ) self.assertEqual(begin, self.EXPECTED_BEGIN) self.assertEqual(end, self.EXPECTED_END) def test_single_day_time(self): time_range = self.TIME_RANGE_FMT.format( self.DATE_TIME_BEGIN, self.DATE_TIME_END ) begin, end = 
parse_utils.parse_trace_collection_time_range( self.SINGLE_DAY_COLLECTION, time_range ) self.assertEqual(begin, self.EXPECTED_BEGIN) self.assertEqual(end, self.EXPECTED_END) def test_multi_day_date(self): time_range = self.TIME_RANGE_FMT.format( self.DATE_FULL_BEGIN, self.DATE_FULL_END ) begin, end = parse_utils.parse_trace_collection_time_range( self.MULTI_DAY_COLLECTION, time_range ) self.assertEqual(begin, self.EXPECTED_BEGIN) self.assertEqual(end, self.EXPECTED_END) def test_multi_day_time(self): time_range = self.TIME_RANGE_FMT.format( self.DATE_TIME_BEGIN, self.DATE_TIME_END ) self.assertRaises( ValueError, parse_utils.parse_trace_collection_time_range, self.MULTI_DAY_COLLECTION, time_range ) lttnganalyses-0.6.1/tests/common/test_trace_utils.py0000664000175000017500000000745712745424023024457 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
import unittest from datetime import date from lttnganalyses.common import trace_utils from .utils import TimezoneUtils # Mock of babeltrace's TraceCollection, used to test date methods class TraceCollection(): def __init__(self, begin_ts, end_ts): self.begin_ts = begin_ts self.end_ts = end_ts @property def timestamp_begin(self): return self.begin_ts @property def timestamp_end(self): return self.end_ts class TestIsMultiDayTraceCollection(unittest.TestCase): def setUp(self): self.tz_utils = TimezoneUtils() self.tz_utils.set_up_timezone() def tearDown(self): self.tz_utils.tear_down_timezone() def test_same_day(self): begin_ts = 683153828123456789 # 1 hour later end_ts = 683157428123456789 collection = TraceCollection(begin_ts, end_ts) result = trace_utils.is_multi_day_trace_collection(collection) self.assertFalse(result) def test_different_day(self): begin_ts = 683153828123456789 # 24 hours later end_ts = 683240228123456789 collection = TraceCollection(begin_ts, end_ts) result = trace_utils.is_multi_day_trace_collection(collection) self.assertTrue(result) class TestGetTraceCollectionDate(unittest.TestCase): def setUp(self): self.tz_utils = TimezoneUtils() self.tz_utils.set_up_timezone() def tearDown(self): self.tz_utils.tear_down_timezone() def test_single_day(self): begin_ts = 683153828123456789 # 1 hour later end_ts = 683157428123456789 collection = TraceCollection(begin_ts, end_ts) result = trace_utils.get_trace_collection_date(collection) expected = date(1991, 8, 25) self.assertEqual(result, expected) def test_multi_day(self): begin_ts = 683153828123456789 # 24 hours later end_ts = 683240228123456789 collection = TraceCollection(begin_ts, end_ts) self.assertRaises(ValueError, trace_utils.get_trace_collection_date, collection) class TestGetSyscallName(unittest.TestCase): class Event(): def __init__(self, name): self.name = name def test_sys(self): event = self.Event('sys_open') result = trace_utils.get_syscall_name(event) self.assertEqual(result, 'open') def test_syscall_entry(self): event = self.Event('syscall_entry_open') result = trace_utils.get_syscall_name(event) self.assertEqual(result, 'open') def test_not_syscall(self): event = self.Event('whatever') self.assertRaises(ValueError, trace_utils.get_syscall_name, event) lttnganalyses-0.6.1/tests/common/__init__.py0000664000175000017500000000217012723101501022611 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
lttnganalyses-0.6.1/tests/integration/0000775000175000017500000000000013033742625021550 5ustar mjeansonmjeanson00000000000000lttnganalyses-0.6.1/tests/integration/analysis_test.py0000664000175000017500000000766413033742515025017 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Julien Desfossez # Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import os import subprocess import unittest import locale from .trace_writer import TraceWriter class AnalysisTest(unittest.TestCase): COMMON_OPTIONS = '--no-color --no-progress --skip-validation --gmt' def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.rm_trace = True def set_up_class(self): dirname = os.path.dirname(os.path.realpath(__file__)) self.data_path = dirname + '/expected/' self.maxDiff = None self.trace_writer = TraceWriter() self.write_trace() def tear_down_class(self): if self.rm_trace: self.trace_writer.rm_trace() def write_trace(self): raise NotImplementedError def run(self, result=None): self.set_up_class() super().run(result) self.tear_down_class() return result def get_expected_output(self, test_name): expected_path = os.path.join(self.data_path, test_name + '.txt') with open(expected_path, 'r', encoding='utf-8') as expected_file: return expected_file.read() def _test_locale(self, locale_name): try: locale.setlocale(locale.LC_ALL, locale_name) return True except locale.Error: return False def _get_utf8_locale(self): # Test the two most common UTF-8 locales if self._test_locale('C.UTF-8'): return 'C.UTF-8' if self._test_locale('en_US.UTF-8'): return 'en_US.UTF-8' print('No supported UTF-8 locale detected') raise NameError def get_cmd_output(self, exec_name, options=''): cmd_fmt = './{} {} {} {}' cmd = cmd_fmt.format(exec_name, self.COMMON_OPTIONS, options, self.trace_writer.trace_root) # Create an utf-8 test env test_locale = self._get_utf8_locale() test_env = os.environ.copy() test_env['LC_ALL'] = test_locale process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=test_env) output, unused_err = process.communicate() output = output.decode('utf-8') if output[-1:] == '\n': output = output[:-1] return output def save_test_result(self, result, test_name): result_path = os.path.join(self.trace_writer.trace_root, test_name) with open(result_path, 'w', encoding='utf-8') as result_file: result_file.write(result) self.rm_trace = False def _assertMultiLineEqual(self, result, expected, test_name): try: self.assertMultiLineEqual(result, 
expected) except AssertionError: self.save_test_result(result, test_name) raise lttnganalyses-0.6.1/tests/integration/trace_writer.py0000664000175000017500000006047412723101501024612 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Julien Desfossez # Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import sys import os import shutil import tempfile from babeltrace import CTFWriter, CTFStringEncoding class TraceWriter(): def __init__(self): self._trace_root = tempfile.mkdtemp() self.trace_path = os.path.join(self.trace_root, "kernel") self.create_writer() self.create_stream_class() self.define_base_types() self.define_events() self.create_stream() @property def trace_root(self): return self._trace_root def rm_trace(self): shutil.rmtree(self.trace_root) def flush(self): self.writer.flush_metadata() self.stream.flush() def create_writer(self): self.clock = CTFWriter.Clock("A_clock") self.clock.description = "Simple clock" self.writer = CTFWriter.Writer(self.trace_path) self.writer.add_clock(self.clock) self.writer.add_environment_field("Python_version", str(sys.version_info)) self.writer.add_environment_field("tracer_major", 2) self.writer.add_environment_field("tracer_minor", 8) self.writer.add_environment_field("tracer_patchlevel", 0) def create_stream_class(self): self.stream_class = CTFWriter.StreamClass("test_stream") self.stream_class.clock = self.clock def define_base_types(self): self.char8_type = CTFWriter.IntegerFieldDeclaration(8) self.char8_type.signed = True self.char8_type.encoding = CTFStringEncoding.UTF8 self.char8_type.alignment = 8 self.int16_type = CTFWriter.IntegerFieldDeclaration(16) self.int16_type.signed = True self.int16_type.alignment = 8 self.uint16_type = CTFWriter.IntegerFieldDeclaration(16) self.uint16_type.signed = False self.uint16_type.alignment = 8 self.int32_type = CTFWriter.IntegerFieldDeclaration(32) self.int32_type.signed = True self.int32_type.alignment = 8 self.uint32_type = CTFWriter.IntegerFieldDeclaration(32) self.uint32_type.signed = False self.uint32_type.alignment = 8 self.int64_type = CTFWriter.IntegerFieldDeclaration(64) self.int64_type.signed = True self.int64_type.alignment = 8 self.uint64_type = CTFWriter.IntegerFieldDeclaration(64) self.uint64_type.signed = False self.uint64_type.alignment = 8 self.array16_type = CTFWriter.ArrayFieldDeclaration(self.char8_type, 16) self.string_type = CTFWriter.StringFieldDeclaration() def add_event(self, event): event.add_field(self.uint32_type, "_cpu_id") 
self.stream_class.add_event_class(event) def define_sched_switch(self): self.sched_switch = CTFWriter.EventClass("sched_switch") self.sched_switch.add_field(self.array16_type, "_prev_comm") self.sched_switch.add_field(self.int32_type, "_prev_tid") self.sched_switch.add_field(self.int32_type, "_prev_prio") self.sched_switch.add_field(self.int64_type, "_prev_state") self.sched_switch.add_field(self.array16_type, "_next_comm") self.sched_switch.add_field(self.int32_type, "_next_tid") self.sched_switch.add_field(self.int32_type, "_next_prio") self.add_event(self.sched_switch) def define_softirq_raise(self): self.softirq_raise = CTFWriter.EventClass("softirq_raise") self.softirq_raise.add_field(self.uint32_type, "_vec") self.add_event(self.softirq_raise) def define_softirq_entry(self): self.softirq_entry = CTFWriter.EventClass("softirq_entry") self.softirq_entry.add_field(self.uint32_type, "_vec") self.add_event(self.softirq_entry) def define_softirq_exit(self): self.softirq_exit = CTFWriter.EventClass("softirq_exit") self.softirq_exit.add_field(self.uint32_type, "_vec") self.add_event(self.softirq_exit) def define_irq_handler_entry(self): self.irq_handler_entry = CTFWriter.EventClass("irq_handler_entry") self.irq_handler_entry.add_field(self.int32_type, "_irq") self.irq_handler_entry.add_field(self.string_type, "_name") self.add_event(self.irq_handler_entry) def define_irq_handler_exit(self): self.irq_handler_exit = CTFWriter.EventClass("irq_handler_exit") self.irq_handler_exit.add_field(self.int32_type, "_irq") self.irq_handler_exit.add_field(self.int32_type, "_ret") self.add_event(self.irq_handler_exit) def define_syscall_entry_write(self): self.syscall_entry_write = CTFWriter.EventClass("syscall_entry_write") self.syscall_entry_write.add_field(self.uint32_type, "_fd") self.syscall_entry_write.add_field(self.uint64_type, "_buf") self.syscall_entry_write.add_field(self.uint64_type, "_count") self.add_event(self.syscall_entry_write) def define_syscall_exit_write(self): self.syscall_exit_write = CTFWriter.EventClass("syscall_exit_write") self.syscall_exit_write.add_field(self.int64_type, "_ret") self.add_event(self.syscall_exit_write) def define_syscall_entry_read(self): self.syscall_entry_read = CTFWriter.EventClass("syscall_entry_read") self.syscall_entry_read.add_field(self.uint32_type, "_fd") self.syscall_entry_read.add_field(self.uint64_type, "_count") self.add_event(self.syscall_entry_read) def define_syscall_exit_read(self): self.syscall_exit_read = CTFWriter.EventClass("syscall_exit_read") self.syscall_exit_read.add_field(self.uint64_type, "_buf") self.syscall_exit_read.add_field(self.int64_type, "_ret") self.add_event(self.syscall_exit_read) def define_syscall_entry_open(self): self.syscall_entry_open = CTFWriter.EventClass("syscall_entry_open") self.syscall_entry_open.add_field(self.string_type, "_filename") self.syscall_entry_open.add_field(self.int32_type, "_flags") self.syscall_entry_open.add_field(self.uint16_type, "_mode") self.add_event(self.syscall_entry_open) def define_syscall_exit_open(self): self.syscall_exit_open = CTFWriter.EventClass("syscall_exit_open") self.syscall_exit_open.add_field(self.int64_type, "_ret") self.add_event(self.syscall_exit_open) def define_lttng_statedump_process_state(self): self.lttng_statedump_process_state = CTFWriter.EventClass( "lttng_statedump_process_state") self.lttng_statedump_process_state.add_field(self.int32_type, "_tid") self.lttng_statedump_process_state.add_field(self.int32_type, "_vtid") 
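        # Both the real and the PID-namespace ("v"-prefixed) identifiers are
        # written, mirroring the lttng_statedump_process_state event layout.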
self.lttng_statedump_process_state.add_field(self.int32_type, "_pid") self.lttng_statedump_process_state.add_field(self.int32_type, "_vpid") self.lttng_statedump_process_state.add_field(self.int32_type, "_ppid") self.lttng_statedump_process_state.add_field(self.int32_type, "_vppid") self.lttng_statedump_process_state.add_field(self.array16_type, "_name") self.lttng_statedump_process_state.add_field(self.int32_type, "_type") self.lttng_statedump_process_state.add_field(self.int32_type, "_mode") self.lttng_statedump_process_state.add_field(self.int32_type, "_submode") self.lttng_statedump_process_state.add_field(self.int32_type, "_status") self.lttng_statedump_process_state.add_field(self.int32_type, "_ns_level") self.add_event(self.lttng_statedump_process_state) def define_lttng_statedump_file_descriptor(self): self.lttng_statedump_file_descriptor = CTFWriter.EventClass( "lttng_statedump_file_descriptor") self.lttng_statedump_file_descriptor.add_field(self.int32_type, "_pid") self.lttng_statedump_file_descriptor.add_field(self.int32_type, "_fd") self.lttng_statedump_file_descriptor.add_field(self.uint32_type, "_flags") self.lttng_statedump_file_descriptor.add_field(self.uint32_type, "_fmode") self.lttng_statedump_file_descriptor.add_field(self.string_type, "_filename") self.add_event(self.lttng_statedump_file_descriptor) def define_sched_wakeup(self): self.sched_wakeup = CTFWriter.EventClass("sched_wakeup") self.sched_wakeup.add_field(self.array16_type, "_comm") self.sched_wakeup.add_field(self.int32_type, "_tid") self.sched_wakeup.add_field(self.int32_type, "_prio") self.sched_wakeup.add_field(self.int32_type, "_success") self.sched_wakeup.add_field(self.int32_type, "_target_cpu") self.add_event(self.sched_wakeup) def define_sched_waking(self): self.sched_waking = CTFWriter.EventClass("sched_waking") self.sched_waking.add_field(self.array16_type, "_comm") self.sched_waking.add_field(self.int32_type, "_tid") self.sched_waking.add_field(self.int32_type, "_prio") self.sched_waking.add_field(self.int32_type, "_target_cpu") self.add_event(self.sched_waking) def define_block_rq_complete(self): self.block_rq_complete = CTFWriter.EventClass("block_rq_complete") self.block_rq_complete.add_field(self.uint32_type, "_dev") self.block_rq_complete.add_field(self.uint64_type, "_sector") self.block_rq_complete.add_field(self.uint32_type, "_nr_sector") self.block_rq_complete.add_field(self.int32_type, "_errors") self.block_rq_complete.add_field(self.uint32_type, "_rwbs") self.block_rq_complete.add_field(self.uint64_type, "__cmd_length") self.block_rq_complete.add_field(self.array16_type, "_cmd") self.add_event(self.block_rq_complete) def define_block_rq_issue(self): self.block_rq_issue = CTFWriter.EventClass("block_rq_issue") self.block_rq_issue.add_field(self.uint32_type, "_dev") self.block_rq_issue.add_field(self.uint64_type, "_sector") self.block_rq_issue.add_field(self.uint32_type, "_nr_sector") self.block_rq_issue.add_field(self.uint32_type, "_bytes") self.block_rq_issue.add_field(self.int32_type, "_tid") self.block_rq_issue.add_field(self.uint32_type, "_rwbs") self.block_rq_issue.add_field(self.uint64_type, "__cmd_length") self.block_rq_issue.add_field(self.array16_type, "_cmd") self.block_rq_issue.add_field(self.array16_type, "_comm") self.add_event(self.block_rq_issue) def define_net_dev_xmit(self): self.net_dev_xmit = CTFWriter.EventClass("net_dev_xmit") self.net_dev_xmit.add_field(self.uint64_type, "_skbaddr") self.net_dev_xmit.add_field(self.int32_type, "_rc") 
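        # _rc is the driver's transmit return code (0 means NETDEV_TX_OK,
        # i.e. the packet was accepted by the device).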
self.net_dev_xmit.add_field(self.uint32_type, "_len") self.net_dev_xmit.add_field(self.string_type, "_name") self.add_event(self.net_dev_xmit) def define_netif_receive_skb(self): self.netif_receive_skb = CTFWriter.EventClass("netif_receive_skb") self.netif_receive_skb.add_field(self.uint64_type, "_skbaddr") self.netif_receive_skb.add_field(self.uint32_type, "_len") self.netif_receive_skb.add_field(self.string_type, "_name") self.add_event(self.netif_receive_skb) def define_events(self): self.define_sched_switch() self.define_softirq_raise() self.define_softirq_entry() self.define_softirq_exit() self.define_irq_handler_entry() self.define_irq_handler_exit() self.define_syscall_entry_write() self.define_syscall_exit_write() self.define_syscall_entry_read() self.define_syscall_exit_read() self.define_syscall_entry_open() self.define_syscall_exit_open() self.define_lttng_statedump_process_state() self.define_lttng_statedump_file_descriptor() self.define_sched_wakeup() self.define_sched_waking() self.define_block_rq_complete() self.define_block_rq_issue() self.define_net_dev_xmit() self.define_netif_receive_skb() def create_stream(self): self.stream = self.writer.create_stream(self.stream_class) def set_char_array(self, event, string): if len(string) > 16: string = string[0:16] else: string = "%s" % (string + "\0" * (16 - len(string))) for i, char in enumerate(string): event.field(i).value = ord(char) def set_int(self, event, value): event.value = value def set_string(self, event, value): event.value = value def write_softirq_raise(self, time_ms, cpu_id, vec): event = CTFWriter.Event(self.softirq_raise) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_vec"), vec) self.stream.append_event(event) self.stream.flush() def write_softirq_entry(self, time_ms, cpu_id, vec): event = CTFWriter.Event(self.softirq_entry) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_vec"), vec) self.stream.append_event(event) self.stream.flush() def write_softirq_exit(self, time_ms, cpu_id, vec): event = CTFWriter.Event(self.softirq_exit) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_vec"), vec) self.stream.append_event(event) self.stream.flush() def write_irq_handler_entry(self, time_ms, cpu_id, irq, name): event = CTFWriter.Event(self.irq_handler_entry) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_irq"), irq) self.set_string(event.payload("_name"), name) self.stream.append_event(event) self.stream.flush() def write_irq_handler_exit(self, time_ms, cpu_id, irq, ret): event = CTFWriter.Event(self.irq_handler_exit) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_irq"), irq) self.set_int(event.payload("_ret"), ret) self.stream.append_event(event) self.stream.flush() def write_syscall_write(self, time_ms, cpu_id, delay, fd, buf, count, ret): event_entry = CTFWriter.Event(self.syscall_entry_write) self.clock.time = time_ms * 1000000 self.set_int(event_entry.payload("_cpu_id"), cpu_id) self.set_int(event_entry.payload("_fd"), fd) self.set_int(event_entry.payload("_buf"), buf) self.set_int(event_entry.payload("_count"), count) self.stream.append_event(event_entry) event_exit = CTFWriter.Event(self.syscall_exit_write) self.clock.time = (time_ms + delay) * 1000000 self.set_int(event_exit.payload("_cpu_id"), 
cpu_id) self.set_int(event_exit.payload("_ret"), ret) self.stream.append_event(event_exit) self.stream.flush() def write_syscall_read(self, time_ms, cpu_id, delay, fd, buf, count, ret): event_entry = CTFWriter.Event(self.syscall_entry_read) self.clock.time = time_ms * 1000000 self.set_int(event_entry.payload("_cpu_id"), cpu_id) self.set_int(event_entry.payload("_fd"), fd) self.set_int(event_entry.payload("_count"), count) self.stream.append_event(event_entry) event_exit = CTFWriter.Event(self.syscall_exit_read) self.clock.time = (time_ms + delay) * 1000000 self.set_int(event_exit.payload("_cpu_id"), cpu_id) self.set_int(event_exit.payload("_buf"), buf) self.set_int(event_exit.payload("_ret"), ret) self.stream.append_event(event_exit) self.stream.flush() def write_syscall_open(self, time_ms, cpu_id, delay, filename, flags, mode, ret): event = CTFWriter.Event(self.syscall_entry_open) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_string(event.payload("_filename"), filename) self.set_int(event.payload("_flags"), flags) self.set_int(event.payload("_mode"), mode) self.stream.append_event(event) self.stream.flush() event = CTFWriter.Event(self.syscall_exit_open) self.clock.time = (time_ms + delay) * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_ret"), ret) self.stream.append_event(event) self.stream.flush() def write_lttng_statedump_file_descriptor(self, time_ms, cpu_id, pid, fd, flags, fmode, filename): event = CTFWriter.Event(self.lttng_statedump_file_descriptor) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_pid"), pid) self.set_int(event.payload("_fd"), fd) self.set_int(event.payload("_flags"), flags) self.set_int(event.payload("_fmode"), fmode) self.set_string(event.payload("_filename"), filename) self.stream.append_event(event) self.stream.flush() def write_lttng_statedump_process_state(self, time_ms, cpu_id, tid, vtid, pid, vpid, ppid, vppid, name, type, mode, submode, status, ns_level): event = CTFWriter.Event(self.lttng_statedump_process_state) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_tid"), tid) self.set_int(event.payload("_vtid"), vtid) self.set_int(event.payload("_pid"), pid) self.set_int(event.payload("_vpid"), vpid) self.set_int(event.payload("_ppid"), ppid) self.set_int(event.payload("_vppid"), vppid) self.set_char_array(event.payload("_name"), name) self.set_int(event.payload("_type"), type) self.set_int(event.payload("_mode"), mode) self.set_int(event.payload("_submode"), submode) self.set_int(event.payload("_status"), status) self.set_int(event.payload("_ns_level"), ns_level) self.stream.append_event(event) self.stream.flush() def write_sched_wakeup(self, time_ms, cpu_id, comm, tid, prio, target_cpu): event = CTFWriter.Event(self.sched_wakeup) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_char_array(event.payload("_comm"), comm) self.set_int(event.payload("_tid"), tid) self.set_int(event.payload("_prio"), prio) self.set_int(event.payload("_target_cpu"), target_cpu) self.stream.append_event(event) self.stream.flush() def write_sched_waking(self, time_ms, cpu_id, comm, tid, prio, target_cpu): event = CTFWriter.Event(self.sched_waking) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_char_array(event.payload("_comm"), comm) self.set_int(event.payload("_tid"), tid) 
self.set_int(event.payload("_prio"), prio) self.set_int(event.payload("_target_cpu"), target_cpu) self.stream.append_event(event) self.stream.flush() def write_block_rq_complete(self, time_ms, cpu_id, dev, sector, nr_sector, errors, rwbs, _cmd_length, cmd): event = CTFWriter.Event(self.block_rq_complete) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_dev"), dev) self.set_int(event.payload("_sector"), sector) self.set_int(event.payload("_nr_sector"), nr_sector) self.set_int(event.payload("_errors"), errors) self.set_int(event.payload("_rwbs"), rwbs) self.set_int(event.payload("__cmd_length"), _cmd_length) self.set_char_array(event.payload("_cmd"), cmd) self.stream.append_event(event) self.stream.flush() def write_block_rq_issue(self, time_ms, cpu_id, dev, sector, nr_sector, bytes, tid, rwbs, _cmd_length, cmd, comm): event = CTFWriter.Event(self.block_rq_issue) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_dev"), dev) self.set_int(event.payload("_sector"), sector) self.set_int(event.payload("_nr_sector"), nr_sector) self.set_int(event.payload("_bytes"), bytes) self.set_int(event.payload("_tid"), tid) self.set_int(event.payload("_rwbs"), rwbs) self.set_int(event.payload("__cmd_length"), _cmd_length) self.set_char_array(event.payload("_cmd"), cmd) self.set_char_array(event.payload("_comm"), comm) self.stream.append_event(event) self.stream.flush() def write_net_dev_xmit(self, time_ms, cpu_id, skbaddr, rc, len, name): event = CTFWriter.Event(self.net_dev_xmit) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_skbaddr"), skbaddr) self.set_int(event.payload("_rc"), rc) self.set_int(event.payload("_len"), len) self.set_string(event.payload("_name"), name) self.stream.append_event(event) self.stream.flush() def write_netif_receive_skb(self, time_ms, cpu_id, skbaddr, len, name): event = CTFWriter.Event(self.netif_receive_skb) self.clock.time = time_ms * 1000000 self.set_int(event.payload("_cpu_id"), cpu_id) self.set_int(event.payload("_skbaddr"), skbaddr) self.set_int(event.payload("_len"), len) self.set_string(event.payload("_name"), name) self.stream.append_event(event) self.stream.flush() def write_sched_switch(self, time_ms, cpu_id, prev_comm, prev_tid, next_comm, next_tid, prev_prio=20, prev_state=1, next_prio=20): event = CTFWriter.Event(self.sched_switch) self.clock.time = time_ms * 1000000 self.set_char_array(event.payload("_prev_comm"), prev_comm) self.set_int(event.payload("_prev_tid"), prev_tid) self.set_int(event.payload("_prev_prio"), prev_prio) self.set_int(event.payload("_prev_state"), prev_state) self.set_char_array(event.payload("_next_comm"), next_comm) self.set_int(event.payload("_next_tid"), next_tid) self.set_int(event.payload("_next_prio"), next_prio) self.set_int(event.payload("_cpu_id"), cpu_id) self.stream.append_event(event) self.stream.flush() def sched_switch_50pc(self, start_time_ms, end_time_ms, cpu_id, period, comm1, tid1, comm2, tid2): current = start_time_ms while current < end_time_ms: self.write_sched_switch(current, cpu_id, comm1, tid1, comm2, tid2) current += period self.write_sched_switch(current, cpu_id, comm2, tid2, comm1, tid1) current += period lttnganalyses-0.6.1/tests/integration/test_cputop.py0000664000175000017500000000455312723101552024473 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Julien Desfossez # Antoine Busque # # 
Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from .analysis_test import AnalysisTest class CpuTest(AnalysisTest): def write_trace(self): # runs the whole time: 100% self.trace_writer.write_sched_switch(1000, 5, 'swapper/5', 0, 'prog100pc-cpu5', 42) # runs for 2s alternating with swapper out every 100ms self.trace_writer.sched_switch_50pc(1100, 5000, 0, 100, 'swapper/0', 0, 'prog20pc-cpu0', 30664) # runs for 2.5s alternating with swapper out every 100ms self.trace_writer.sched_switch_50pc(5100, 10000, 1, 100, 'swapper/1', 0, 'prog25pc-cpu1', 30665) # switch out prog100pc-cpu5 self.trace_writer.write_sched_switch(11000, 5, 'prog100pc-cpu5', 42, 'swapper/5', 0) self.trace_writer.flush() def test_cputop(self): test_name = 'cputop' expected = self.get_expected_output(test_name) result = self.get_cmd_output('lttng-cputop', options='--no-intersection') self._assertMultiLineEqual(result, expected, test_name) lttnganalyses-0.6.1/tests/integration/test_io.py0000664000175000017500000001001012723101552023551 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Julien Desfossez # Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
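# Integration test for the I/O analyses: write_trace() synthesizes a small
# kernel trace (statedump, read/write/open syscalls, block requests, and
# network events), then each test_* method runs the matching analysis script
# and diffs its output against the fixtures in tests/integration/expected/.
#
# To run only this module (from the project root, with the Babeltrace Python
# bindings installed):
#
#     python3 -m unittest tests.integration.test_io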
from .analysis_test import AnalysisTest


class IoTest(AnalysisTest):
    def write_trace(self):
        # app (99) is known at statedump
        self.trace_writer.write_lttng_statedump_process_state(
            1000, 0, 99, 99, 99, 99, 98, 98, 'app', 0, 5, 0, 5, 0)
        # app2 (100), unknown at statedump, has testfile (FD 3) defined at
        # statedump
        self.trace_writer.write_lttng_statedump_file_descriptor(
            1001, 0, 100, 3, 0, 0, 'testfile')
        # app writes 10 bytes to FD 4
        self.trace_writer.write_sched_switch(1002, 0, 'swapper/0', 0,
                                             'app', 99)
        self.trace_writer.write_syscall_write(1004, 0, 1, 4, 0xabcd, 10, 10)
        # app2 reads 100 bytes from FD 3
        self.trace_writer.write_sched_switch(1006, 0, 'app', 99,
                                             'app2', 100)
        self.trace_writer.write_syscall_read(1008, 0, 1, 3, 0xcafe, 100, 100)
        # app3 and its FD 3 are completely unknown at statedump; it tries to
        # read 100 bytes from FD 3 but only gets 42
        self.trace_writer.write_sched_switch(1010, 0, 'app2', 100,
                                             'app3', 101)
        self.trace_writer.write_syscall_read(1012, 0, 1, 3, 0xcafe, 100, 42)
        # block write
        self.trace_writer.write_block_rq_issue(1015, 0, 264241152, 33, 10,
                                               40, 99, 0, 0, '', 'app')
        self.trace_writer.write_block_rq_complete(1016, 0, 264241152, 33,
                                                  10, 0, 0, 0, '')
        # block read
        self.trace_writer.write_block_rq_issue(1017, 0, 8388608, 33, 20,
                                               90, 101, 1, 0, '', 'app3')
        self.trace_writer.write_block_rq_complete(1018, 0, 8388608, 33,
                                                  20, 0, 1, 0, '')
        # net xmit
        self.trace_writer.write_net_dev_xmit(1020, 2, 0xff, 32, 100, 'wlan0')
        # net receive
        self.trace_writer.write_netif_receive_skb(1021, 1, 0xff, 100, 'wlan1')
        self.trace_writer.write_netif_receive_skb(1022, 1, 0xff, 200, 'wlan0')
        # syscall open
        self.trace_writer.write_syscall_open(1023, 0, 1, 'test/open/file',
                                             0, 0, 42)
        self.trace_writer.flush()

    def test_iousagetop(self):
        test_name = 'iousagetop'
        expected = self.get_expected_output(test_name)
        result = self.get_cmd_output('lttng-iousagetop',
                                     options='--no-intersection')
        self._assertMultiLineEqual(result, expected, test_name)

    def test_iolatencytop(self):
        test_name = 'iolatencytop'
        expected = self.get_expected_output(test_name)
        result = self.get_cmd_output('lttng-iolatencytop',
                                     options='--no-intersection')
        self._assertMultiLineEqual(result, expected, test_name)
lttnganalyses-0.6.1/tests/integration/gen_ctfwriter.py0000775000175000017500000001216712726625546025000 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3
#
# The MIT License (MIT)
#
# Copyright (C) 2016 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
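# Typical invocation (the session path below is only an example); the
# generated define_*/write_* methods are printed on stdout:
#
#     ./gen_ctfwriter.py ~/lttng-traces/my-session/kernel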
# Helper tool to generate CTFWriter code from the metadata of an existing
# trace.
# It is used to add code in TraceTest.py.
# Only the basic types are supported; a warning is generated if a field's
# code cannot be generated, so it is easy to look manually at the metadata
# and fix it.

import sys
import argparse

from babeltrace import TraceCollection, CTFScope, CTFTypeId


def sanitize(s):
    """Replace special characters in s by underscores.

    This makes s suitable to use in code as a function or variable name.
    """
    s = s.replace(':', '_')
    return s


def get_definition_type(field, event):
    event_name = sanitize(event.name)
    if field.type == CTFTypeId.INTEGER:
        signed = ''
        if field.signedness == 0:
            signed = 'u'
        length = field.length
        print('        self.%s.add_field(self.%sint%s_type, "_%s")' %
              (event_name, signed, length, field.name))
    elif field.type == CTFTypeId.ARRAY:
        print('        self.%s.add_field(self.array%s_type, "_%s")' %
              (event_name, field.length, field.name))
    elif field.type == CTFTypeId.STRING:
        print('        self.%s.add_field(self.string_type, "_%s")' %
              (event_name, field.name))
    else:
        print('        # FIXME %s.%s: Unhandled type %d' %
              (event.name, field.name, field.type))


def gen_define(event):
    fields = []
    event_name = sanitize(event.name)
    print('    def define_%s(self):' % (event_name))
    print('        self.%s = CTFWriter.EventClass("%s")' %
          (event_name, event.name))
    for field in event.fields:
        if field.scope == CTFScope.EVENT_FIELDS:
            fname = field.name
            fields.append(fname)
            get_definition_type(field, event)
    print('        self.add_event(self.%s)' % event_name)
    print('')
    return fields


def gen_write(event, fields):
    f_list = ''
    for f in fields:
        f_list += ', {}'.format(f)
    event_name = sanitize(event.name)
    print('    def write_%s(self, time_ms, cpu_id%s):' % (event_name, f_list))
    print('        event = CTFWriter.Event(self.%s)' % (event_name))
    print('        self.clock.time = time_ms * 1000000')
    print('        self.set_int(event.payload("_cpu_id"), cpu_id)')
    for field in event.fields:
        if field.scope == CTFScope.EVENT_FIELDS:
            fname = field.name
            if field.type == CTFTypeId.INTEGER:
                print('        self.set_int(event.payload("_%s"), %s)' %
                      (fname, fname))
            elif field.type == CTFTypeId.ARRAY:
                print('        self.set_char_array(event.payload("_%s"), '
                      '%s)' % (fname, fname))
            elif field.type == CTFTypeId.STRING:
                print('        self.set_string(event.payload("_%s"), %s)' %
                      (fname, fname))
            else:
                print('        # FIXME %s.%s: Unhandled type %d' %
                      (event.name, field.name, field.type))
    print('        self.stream.append_event(event)')
    print('        self.stream.flush()')
    print('')


def gen_parser(handle, args):
    for h in handle.values():
        for event in h.events:
            fields = gen_define(event)
            gen_write(event, fields)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='CTFWriter code generator')
    parser.add_argument('path', metavar="<path>", help='Trace path')
    args = parser.parse_args()

    traces = TraceCollection()
    handle = traces.add_traces_recursive(args.path, "ctf")
    if handle is None:
        sys.exit(1)

    gen_parser(handle, args)

    for h in handle.values():
        traces.remove_trace(h)
lttnganalyses-0.6.1/tests/integration/test_intersect.py0000664000175000017500000000517012723101552025155 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT)
#
# Copyright (C) 2016 - Julien Desfossez
#                      Antoine Busque
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software,
and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. import unittest from lttnganalyses.common import trace_utils from .analysis_test import AnalysisTest class IntersectTest(AnalysisTest): def write_trace(self): # Write these events in the default stream. self.trace_writer.write_softirq_raise(1005, 3, 1) self.trace_writer.write_softirq_entry(1006, 3, 1) self.trace_writer.write_softirq_exit(1009, 3, 1) # Override the default stream, so all new events are written # in a different stream, no overlapping timestamps between streams. self.trace_writer.create_stream() self.trace_writer.write_softirq_exit(1010, 2, 7) self.trace_writer.flush() @unittest.skipIf(trace_utils.read_babeltrace_version() < trace_utils.BT_INTERSECT_VERSION, "not supported by Babeltrace < %s" % trace_utils.BT_INTERSECT_VERSION,) def test_no_intersection(self): test_name = 'no_intersection' expected = self.get_expected_output(test_name) result = self.get_cmd_output('lttng-irqstats') self._assertMultiLineEqual(result, expected, test_name) def test_disable_intersect(self): test_name = 'disable_intersect' expected = self.get_expected_output(test_name) result = self.get_cmd_output('lttng-irqstats', options='--no-intersection') self._assertMultiLineEqual(result, expected, test_name) lttnganalyses-0.6.1/tests/integration/test_irq.py0000664000175000017500000001115512723101552023750 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Julien Desfossez # Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
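# Integration test for the IRQ analyses: write_trace() emits a fixed sequence
# of softirq raise/entry/exit and irq_handler_entry/exit events, and the
# test_* methods check lttng-irqstats and lttng-irqlog output against the
# expected text fixtures.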
from .analysis_test import AnalysisTest class IrqTest(AnalysisTest): def write_trace(self): self.trace_writer.write_softirq_raise(1000, 1, 1) self.trace_writer.write_softirq_raise(1001, 3, 1) self.trace_writer.write_softirq_raise(1002, 1, 9) self.trace_writer.write_softirq_exit(1003, 0, 4) self.trace_writer.write_softirq_raise(1004, 3, 9) self.trace_writer.write_softirq_raise(1005, 3, 7) self.trace_writer.write_softirq_entry(1006, 3, 1) self.trace_writer.write_softirq_entry(1007, 1, 1) self.trace_writer.write_softirq_exit(1008, 1, 1) self.trace_writer.write_softirq_exit(1009, 3, 1) self.trace_writer.write_softirq_entry(1010, 1, 9) self.trace_writer.write_softirq_entry(1011, 3, 7) self.trace_writer.write_softirq_exit(1012, 1, 9) self.trace_writer.write_softirq_exit(1013, 3, 7) self.trace_writer.write_softirq_entry(1014, 3, 9) self.trace_writer.write_softirq_exit(1015, 3, 9) self.trace_writer.write_irq_handler_entry(1016, 0, 41, 'ahci') self.trace_writer.write_softirq_raise(1017, 0, 4) self.trace_writer.write_irq_handler_exit(1018, 0, 41, 1) self.trace_writer.write_softirq_entry(1019, 0, 4) self.trace_writer.write_softirq_exit(1020, 0, 4) self.trace_writer.write_irq_handler_entry(1021, 0, 41, 'ahci') self.trace_writer.write_softirq_raise(1022, 0, 4) self.trace_writer.write_irq_handler_exit(1023, 0, 41, 1) self.trace_writer.write_softirq_entry(1024, 0, 4) self.trace_writer.write_softirq_exit(1025, 0, 4) self.trace_writer.write_irq_handler_entry(1026, 0, 41, 'ahci') self.trace_writer.write_softirq_raise(1027, 0, 4) self.trace_writer.write_irq_handler_exit(1028, 0, 41, 1) self.trace_writer.write_softirq_entry(1029, 0, 4) self.trace_writer.write_softirq_exit(1030, 0, 4) self.trace_writer.write_irq_handler_entry(1031, 0, 41, 'ahci') self.trace_writer.write_softirq_raise(1032, 0, 4) self.trace_writer.write_irq_handler_exit(1033, 0, 41, 1) self.trace_writer.write_softirq_entry(1034, 0, 4) self.trace_writer.write_softirq_exit(1035, 0, 4) self.trace_writer.write_irq_handler_entry(1036, 0, 41, 'ahci') self.trace_writer.write_softirq_raise(1037, 0, 4) self.trace_writer.write_irq_handler_exit(1038, 0, 41, 1) self.trace_writer.write_softirq_entry(1039, 0, 4) self.trace_writer.write_softirq_exit(1040, 0, 4) self.trace_writer.write_irq_handler_entry(1041, 0, 41, 'ahci') self.trace_writer.write_softirq_raise(1042, 0, 4) self.trace_writer.write_irq_handler_exit(1043, 0, 41, 1) self.trace_writer.write_softirq_entry(1044, 0, 4) self.trace_writer.write_softirq_exit(1045, 0, 4) self.trace_writer.flush() def test_irqstats(self): test_name = 'irqstats' expected = self.get_expected_output(test_name) result = self.get_cmd_output('lttng-irqstats', options='--no-intersection') self._assertMultiLineEqual(result, expected, test_name) def test_irqlog(self): test_name = 'irqlog' expected = self.get_expected_output(test_name) result = self.get_cmd_output('lttng-irqlog', options='--no-intersection') self._assertMultiLineEqual(result, expected, test_name) lttnganalyses-0.6.1/tests/integration/__init__.py0000664000175000017500000000217012723101501023644 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the 
Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. lttnganalyses-0.6.1/tests/integration/expected/0000775000175000017500000000000013033742625023351 5ustar mjeansonmjeanson00000000000000lttnganalyses-0.6.1/tests/integration/expected/no_intersection.txt0000664000175000017500000000010312725622423027306 0ustar mjeansonmjeanson00000000000000Error: Trace has no intersection. Use --no-intersection to overridelttnganalyses-0.6.1/tests/integration/expected/iousagetop.txt0000664000175000017500000001535412725622423026301 0ustar mjeansonmjeanson00000000000000Timerange: [1970-01-01 00:00:01.000000000, 1970-01-01 00:00:01.024000000] Per-process I/O Read Process Disk Net Unknown ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 100 B app2 (100) 0 B 0 B 100 B █████████████████████████████████ 42 B app3 (unknown (tid=101)) 0 B 0 B 42 B 0 B app (99) 0 B 0 B 0 B Per-process I/O Write Process Disk Net Unknown ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 10 B app (99) 0 B 0 B 10 B 0 B app2 (100) 0 B 0 B 0 B 0 B app3 (unknown (tid=101)) 0 B 0 B 0 B Per-file I/O Read Path ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 100 B testfile █████████████████████████████████ 42 B unknown (app3) Per-file I/O Write Path ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 10 B unknown (app) Block I/O Read Process ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 5.00 KiB app (pid=99) Block I/O Write Process ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 10.00 KiB app3 (pid=unknown (tid=101)) Disk Requests Sector Count Disk ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 20 sectors (8,0) ████████████████████████████████████████ 10 sectors (252,0) Disk Request Count Disk ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 1 requests (252,0) ████████████████████████████████████████████████████████████████████████████████ 1 requests (8,0) Disk Request Average Latency Disk ################################################################################ 
████████████████████████████████████████████████████████████████████████████████ 1.00 ms (252,0) ████████████████████████████████████████████████████████████████████████████████ 1.00 ms (8,0) Network Received Bytes Interface ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 200 B wlan0 ████████████████████████████████████████ 100 B wlan1 Network Sent Bytes Interface ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 100 B wlan0 0 B wlan1 lttnganalyses-0.6.1/tests/integration/expected/iolatencytop.txt0000664000175000017500000000221612725622423026625 0ustar mjeansonmjeanson00000000000000Timerange: [1970-01-01 00:00:01.000000000, 1970-01-01 00:00:01.024000000] Top system call latencies open (usec) Begin End Name Duration (usec) Size Proc PID Filename [00:00:01.023000000, 00:00:01.024000000] open 1000.000 N/A app3 101 test/open/file (fd=42) Top system call latencies read (usec) Begin End Name Duration (usec) Size Proc PID Filename [00:00:01.008000000, 00:00:01.009000000] read 1000.000 100 B app2 100 testfile (fd=3) [00:00:01.012000000, 00:00:01.013000000] read 1000.000 42 B app3 101 unknown (fd=3) Top system call latencies write (usec) Begin End Name Duration (usec) Size Proc PID Filename [00:00:01.004000000, 00:00:01.005000000] write 1000.000 10 B app 99 unknown (fd=4)lttnganalyses-0.6.1/tests/integration/expected/irqlog.txt0000664000175000017500000000414312725622423025411 0ustar mjeansonmjeanson00000000000000Timerange: [1970-01-01 00:00:01.000000000, 1970-01-01 00:00:01.045000000] Begin End Duration (us) CPU Type # Name [00:00:01.007000000, 00:00:01.008000000] 1000.000 1 SoftIRQ 1 TIMER_SOFTIRQ (raised at 00:00:01.000000000) [00:00:01.006000000, 00:00:01.009000000] 3000.000 3 SoftIRQ 1 TIMER_SOFTIRQ (raised at 00:00:01.001000000) [00:00:01.010000000, 00:00:01.012000000] 2000.000 1 SoftIRQ 9 RCU_SOFTIRQ (raised at 00:00:01.002000000) [00:00:01.011000000, 00:00:01.013000000] 2000.000 3 SoftIRQ 7 SCHED_SOFTIRQ (raised at 00:00:01.005000000) [00:00:01.014000000, 00:00:01.015000000] 1000.000 3 SoftIRQ 9 RCU_SOFTIRQ (raised at 00:00:01.004000000) [00:00:01.016000000, 00:00:01.018000000] 2000.000 0 IRQ 41 ahci [00:00:01.019000000, 00:00:01.020000000] 1000.000 0 SoftIRQ 4 BLOCK_SOFTIRQ (raised at 00:00:01.017000000) [00:00:01.021000000, 00:00:01.023000000] 2000.000 0 IRQ 41 ahci [00:00:01.024000000, 00:00:01.025000000] 1000.000 0 SoftIRQ 4 BLOCK_SOFTIRQ (raised at 00:00:01.022000000) [00:00:01.026000000, 00:00:01.028000000] 2000.000 0 IRQ 41 ahci [00:00:01.029000000, 00:00:01.030000000] 1000.000 0 SoftIRQ 4 BLOCK_SOFTIRQ (raised at 00:00:01.027000000) [00:00:01.031000000, 00:00:01.033000000] 2000.000 0 IRQ 41 ahci [00:00:01.034000000, 00:00:01.035000000] 1000.000 0 SoftIRQ 4 BLOCK_SOFTIRQ (raised at 00:00:01.032000000) [00:00:01.036000000, 00:00:01.038000000] 2000.000 0 IRQ 41 ahci [00:00:01.039000000, 00:00:01.040000000] 1000.000 0 SoftIRQ 4 BLOCK_SOFTIRQ (raised at 00:00:01.037000000) [00:00:01.041000000, 00:00:01.043000000] 2000.000 0 IRQ 41 ahci [00:00:01.044000000, 00:00:01.045000000] 1000.000 0 SoftIRQ 4 BLOCK_SOFTIRQ (raised at 00:00:01.042000000)lttnganalyses-0.6.1/tests/integration/expected/irqstats.txt0000664000175000017500000000255412725622423025772 0ustar mjeansonmjeanson00000000000000Timerange: [1970-01-01 00:00:01.000000000, 1970-01-01 00:00:01.045000000] Hard IRQ 
Duration (us) count min avg max stdev ----------------------------------------------------------------------------------| 41: 6 2000.000 2000.000 2000.000 0.000 | Soft IRQ Duration (us) Raise latency (us) count min avg max stdev | count min avg max stdev ----------------------------------------------------------------------------------|------------------------------------------------------------ 1: 2 1000.000 2000.000 3000.000 1414.214 | 2 5000.000 6000.000 7000.000 1414.214 4: 6 1000.000 1000.000 1000.000 0.000 | 6 2000.000 2000.000 2000.000 0.000 7: 1 2000.000 2000.000 2000.000 ? | 1 6000.000 6000.000 6000.000 ? 9: 2 1000.000 1500.000 2000.000 707.107 | 2 8000.000 9000.000 10000.000 1414.214lttnganalyses-0.6.1/tests/integration/expected/cputop.txt0000664000175000017500000000336112775773625025447 0ustar mjeansonmjeanson00000000000000Timerange: [1970-01-01 00:00:01.000000000, 1970-01-01 00:00:11.000000000] Per-TID Usage Process Migrations Priorities ################################################################################ ████████████████████████████████████████████████████████████████████████████████ 100.00 % prog100pc-cpu5 (42) 0 [20] ████████████████████ 25.00 % prog25pc-cpu1 (30665) 0 [20] ████████████████ 20.00 % prog20pc-cpu0 (30664) 0 [20] 0.00 % swapper/5 (0) 0 [] Per-CPU Usage ################################################################################ ████████████████ 21.00 % CPU 0 ████████████████████████████████████████████████████ 66.00 % CPU 1 ████████████████████████████████████████████████████████████████████████████████ 100.00 % CPU 5 Total CPU Usage: 62.33% lttnganalyses-0.6.1/tests/integration/expected/disable_intersect.txt0000664000175000017500000000117312725622423027577 0ustar mjeansonmjeanson00000000000000Timerange: [1970-01-01 00:00:01.005000000, 1970-01-01 00:00:01.010000000] Soft IRQ Duration (us) Raise latency (us) count min avg max stdev | count min avg max stdev ----------------------------------------------------------------------------------|------------------------------------------------------------ 1: 1 3000.000 3000.000 3000.000 ? | 1 1000.000 1000.000 1000.000 ?lttnganalyses-0.6.1/tests/__init__.py0000664000175000017500000000225012746676752021356 0ustar mjeansonmjeanson00000000000000# The MIT License (MIT) # # Copyright (C) 2016 - Antoine Busque # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from . import common from . 
import integration lttnganalyses-0.6.1/lttnganalyses.egg-info/0000775000175000017500000000000013033742625022445 5ustar mjeansonmjeanson00000000000000lttnganalyses-0.6.1/lttnganalyses.egg-info/entry_points.txt0000664000175000017500000000373713033742625025755 0ustar mjeansonmjeanson00000000000000[console_scripts] lttng-cputop = lttnganalyses.cli.cputop:run lttng-cputop-mi = lttnganalyses.cli.cputop:run_mi lttng-iolatencyfreq = lttnganalyses.cli.io:runfreq lttng-iolatencyfreq-mi = lttnganalyses.cli.io:runfreq_mi lttng-iolatencystats = lttnganalyses.cli.io:runstats lttng-iolatencystats-mi = lttnganalyses.cli.io:runstats_mi lttng-iolatencytop = lttnganalyses.cli.io:runlatencytop lttng-iolatencytop-mi = lttnganalyses.cli.io:runlatencytop_mi lttng-iolog = lttnganalyses.cli.io:runlog lttng-iolog-mi = lttnganalyses.cli.io:runlog_mi lttng-iousagetop = lttnganalyses.cli.io:runusage lttng-iousagetop-mi = lttnganalyses.cli.io:runusage_mi lttng-irqfreq = lttnganalyses.cli.irq:runfreq lttng-irqfreq-mi = lttnganalyses.cli.irq:runfreq_mi lttng-irqlog = lttnganalyses.cli.irq:runlog lttng-irqlog-mi = lttnganalyses.cli.irq:runlog_mi lttng-irqstats = lttnganalyses.cli.irq:runstats lttng-irqstats-mi = lttnganalyses.cli.irq:runstats_mi lttng-memtop = lttnganalyses.cli.memtop:run lttng-memtop-mi = lttnganalyses.cli.memtop:run_mi lttng-periodfreq = lttnganalyses.cli.periods:runfreq lttng-periodfreq-mi = lttnganalyses.cli.periods:runfreq_mi lttng-periodlog = lttnganalyses.cli.periods:runlog lttng-periodlog-mi = lttnganalyses.cli.periods:runlog_mi lttng-periodstats = lttnganalyses.cli.periods:runstats lttng-periodstats-mi = lttnganalyses.cli.periods:runstats_mi lttng-periodtop = lttnganalyses.cli.periods:runtop lttng-periodtop-mi = lttnganalyses.cli.periods:runtop_mi lttng-schedfreq = lttnganalyses.cli.sched:runfreq lttng-schedfreq-mi = lttnganalyses.cli.sched:runfreq_mi lttng-schedlog = lttnganalyses.cli.sched:runlog lttng-schedlog-mi = lttnganalyses.cli.sched:runlog_mi lttng-schedstats = lttnganalyses.cli.sched:runstats lttng-schedstats-mi = lttnganalyses.cli.sched:runstats_mi lttng-schedtop = lttnganalyses.cli.sched:runtop lttng-schedtop-mi = lttnganalyses.cli.sched:runtop_mi lttng-syscallstats = lttnganalyses.cli.syscallstats:run lttng-syscallstats-mi = lttnganalyses.cli.syscallstats:run_mi lttnganalyses-0.6.1/lttnganalyses.egg-info/PKG-INFO0000664000175000017500000023373213033742625023554 0ustar mjeansonmjeanson00000000000000Metadata-Version: 1.1 Name: lttnganalyses Version: 0.6.1 Summary: LTTng analyses Home-page: https://github.com/lttng/lttng-analyses Author: Julien Desfossez Author-email: jdesfossez@efficios.com License: MIT Description: LTTng analyses ************** .. image:: https://img.shields.io/pypi/v/lttnganalyses.svg?label=Latest%20version :target: https://pypi.python.org/pypi/lttnganalyses :alt: Latest version released on PyPi .. image:: https://travis-ci.org/lttng/lttng-analyses.svg?branch=master&label=Travis%20CI%20build :target: https://travis-ci.org/lttng/lttng-analyses :alt: Status of Travis CI .. image:: https://img.shields.io/jenkins/s/https/ci.lttng.org/lttng-analyses_master_build.svg?label=LTTng%20CI%20build :target: https://ci.lttng.org/job/lttng-analyses_master_build :alt: Status of LTTng CI The **LTTng analyses** are a set of various executable analyses to extract and visualize monitoring data and metrics from `LTTng `_ kernel traces on the command line. As opposed to other "live" diagnostic or monitoring solutions, this approach is based on the following workflow: #. 
Record your system's activity with LTTng, a low-overhead tracer. #. Do whatever it takes for your problem to occur. #. Diagnose your problem's cause **offline** (when tracing is stopped). This solution allows you to target problems that are hard to find and to "dig" until the root cause is found. **Current limitations**: - The LTTng analyses can be quite slow to execute. There are a number of places where they could be optimized, but using the Python interpreter seems to be an important impediment. This project is regarded by its authors as a testing ground to experiment analysis features, user interfaces, and usability in general. It is not considered ready to analyze long traces. **Contents**: .. contents:: :local: :depth: 3 :backlinks: none Install LTTng analyses ====================== .. NOTE:: The version 2.0 of `Trace Compass `_ requires LTTng analyses 0.4: Trace Compass 2.0 is not compatible with LTTng analyses 0.5 and after. In this case, we suggest that you install LTTng analyses from the ``stable-0.4`` branch of the project's Git repository (see `Install from the Git repository`_). You can also `download `_ the latest 0.4 release tarball and follow the `Install from a release tarball`_ procedure. Required dependencies --------------------- - `Python `_ ≥ 3.4 - `setuptools `_ - `pyparsing `_ ≥ 2.0.0 - `Babeltrace `_ ≥ 1.2 with Python bindings (``--enable-python-bindings`` when building from source) Optional dependencies --------------------- - `LTTng `_ ≥ 2.5: to use the ``lttng-analyses-record`` script and to trace the system in general - `termcolor `_: color support - `progressbar `_: terminal progress bar support (this is not required for the machine interface's progress indication feature) Install from PyPI (online repository) ------------------------------------- To install the latest LTTng analyses release on your system from `PyPI `_: #. Install the required dependencies. #. **Optional**: Install the optional dependencies. #. Make sure ``pip`` for Python 3 is installed on your system. The package is named ``python3-pip`` on most distributions (``python-pip`` on Arch Linux). #. Use ``pip3`` to install LTTng analyses: .. code-block:: bash sudo pip3 install --upgrade lttnganalyses Note that you can also install LTTng analyses locally, only for your user: .. code-block:: bash pip3 install --user --upgrade lttnganalyses Files are installed in ``~/.local``, therefore ``~/.local/bin`` must be part of your ``PATH`` environment variable for the LTTng analyses to be launchable. Install from a release tarball ------------------------------ To install a specific LTTng analyses release (tarball) on your system: #. Install the required dependencies. #. **Optional**: Install the optional dependencies. #. `Download `_ and extract the desired release tarball. #. Use ``setup.py`` to install LTTng analyses: .. code-block:: bash sudo ./setup.py install Install from the Git repository ------------------------------- To install LTTng analyses from a specific branch or tag of the project's Git repository: #. Install the required dependencies. #. **Optional**: Install the optional dependencies. #. Make sure ``pip`` for Python 3 is installed on your system. The package is named ``python3-pip`` on most distributions (``python-pip`` on Arch Linux). #. Use ``pip3`` to install LTTng analyses: .. code-block:: bash sudo pip3 install --upgrade git+git://github.com/lttng/lttng-analyses.git@master Replace ``master`` with the desired branch or tag name to install in the previous URL. 
Note that you can also install LTTng analyses locally, only for your user:

.. code-block:: bash

   pip3 install --user --upgrade git+git://github.com/lttng/lttng-analyses.git@master

Files are installed in ``~/.local``, therefore ``~/.local/bin`` must be
part of your ``PATH`` environment variable for the LTTng analyses to be
launchable.

Install on Ubuntu
-----------------

To install LTTng analyses on Ubuntu ≥ 12.04:

#. Add the *LTTng Latest Stable* PPA repository:

   .. code-block:: bash

      sudo apt-get install -y software-properties-common
      sudo apt-add-repository -y ppa:lttng/ppa
      sudo apt-get update

   Replace ``software-properties-common`` with
   ``python-software-properties`` on Ubuntu 12.04.

#. Install the required dependencies:

   .. code-block:: bash

      sudo apt-get install -y babeltrace
      sudo apt-get install -y python3-babeltrace
      sudo apt-get install -y python3-setuptools

   On Ubuntu > 12.04:

   .. code-block:: bash

      sudo apt-get install -y python3-pyparsing

   On Ubuntu 12.04:

   .. code-block:: bash

      sudo pip3 install --upgrade pyparsing

#. **Optional**: Install the optional dependencies:

   .. code-block:: bash

      sudo apt-get install -y lttng-tools
      sudo apt-get install -y lttng-modules-dkms
      sudo apt-get install -y python3-progressbar
      sudo apt-get install -y python3-termcolor

#. Install LTTng analyses:

   .. code-block:: bash

      sudo apt-get install -y python3-lttnganalyses

Install on Debian "sid"
-----------------------

To install LTTng analyses on Debian "sid":

#. Install the required dependencies:

   .. code-block:: bash

      sudo apt-get install -y babeltrace
      sudo apt-get install -y python3-babeltrace
      sudo apt-get install -y python3-setuptools
      sudo apt-get install -y python3-pyparsing

#. **Optional**: Install the optional dependencies:

   .. code-block:: bash

      sudo apt-get install -y lttng-tools
      sudo apt-get install -y lttng-modules-dkms
      sudo apt-get install -y python3-progressbar
      sudo apt-get install -y python3-termcolor

#. Install LTTng analyses:

   .. code-block:: bash

      sudo apt-get install -y python3-lttnganalyses

Record a trace
==============

This section is a quick reminder of how to record an LTTng kernel trace.
See LTTng's `quick start guide `_ to familiarize yourself with LTTng.

Automatic
---------

LTTng analyses ships with a handy (installed) script,
``lttng-analyses-record``, which automates the steps to record a kernel
trace with the events required by the analyses.

To use ``lttng-analyses-record``:

#. Launch the installed script:

   .. code-block:: bash

      lttng-analyses-record

#. Do whatever it takes for your problem to occur.

#. When you are done recording, press Ctrl+C where the script is running.

Manual
------

To record an LTTng kernel trace suitable for the LTTng analyses:

#. Create a tracing session:

   .. code-block:: bash

      sudo lttng create

#. Create a channel with a large sub-buffer size:

   .. code-block:: bash

      sudo lttng enable-channel --kernel chan --subbuf-size=8M

#. Create event rules to capture the needed events:

   ..
code-block:: bash sudo lttng enable-event --kernel --channel=chan block_bio_backmerge sudo lttng enable-event --kernel --channel=chan block_bio_remap sudo lttng enable-event --kernel --channel=chan block_rq_complete sudo lttng enable-event --kernel --channel=chan block_rq_issue sudo lttng enable-event --kernel --channel=chan irq_handler_entry sudo lttng enable-event --kernel --channel=chan irq_handler_exit sudo lttng enable-event --kernel --channel=chan irq_softirq_entry sudo lttng enable-event --kernel --channel=chan irq_softirq_exit sudo lttng enable-event --kernel --channel=chan irq_softirq_raise sudo lttng enable-event --kernel --channel=chan kmem_mm_page_alloc sudo lttng enable-event --kernel --channel=chan kmem_mm_page_free sudo lttng enable-event --kernel --channel=chan lttng_statedump_block_device sudo lttng enable-event --kernel --channel=chan lttng_statedump_file_descriptor sudo lttng enable-event --kernel --channel=chan lttng_statedump_process_state sudo lttng enable-event --kernel --channel=chan mm_page_alloc sudo lttng enable-event --kernel --channel=chan mm_page_free sudo lttng enable-event --kernel --channel=chan net_dev_xmit sudo lttng enable-event --kernel --channel=chan netif_receive_skb sudo lttng enable-event --kernel --channel=chan sched_pi_setprio sudo lttng enable-event --kernel --channel=chan sched_process_exec sudo lttng enable-event --kernel --channel=chan sched_process_fork sudo lttng enable-event --kernel --channel=chan sched_switch sudo lttng enable-event --kernel --channel=chan sched_wakeup sudo lttng enable-event --kernel --channel=chan sched_waking sudo lttng enable-event --kernel --channel=chan softirq_entry sudo lttng enable-event --kernel --channel=chan softirq_exit sudo lttng enable-event --kernel --channel=chan softirq_raise sudo lttng enable-event --kernel --channel=chan --syscall --all #. Start recording: .. code-block:: bash sudo lttng start #. Do whatever it takes for your problem to occur. #. Stop recording and destroy the tracing session to free its resources: .. code-block:: bash sudo lttng stop sudo lttng destroy See the `LTTng Documentation `_ for other use cases, like sending the trace data over the network instead of recording trace files on the target's file system. Run an LTTng analysis ===================== The **LTTng analyses** are a set of various command-line analyses. Each analysis accepts the path to a recorded trace (see `Record a trace`_) as its argument, as well as various command-line options to control the analysis and its output. Many command-line options are common to all the analyses, so that you can filter by timerange, process name, process ID, minimum and maximum values, and the rest. Also note that the reported timestamps can optionally be expressed in the GMT time zone. Each analysis is installed as an executable starting with the ``lttng-`` prefix. .. list-table:: Available LTTng analyses :header-rows: 1 * - Command - Description * - ``lttng-cputop`` - Per-TID, per-CPU, and total top CPU usage. * - ``lttng-iolatencyfreq`` - I/O request latency distribution. * - ``lttng-iolatencystats`` - Partition and system call latency statistics. * - ``lttng-iolatencytop`` - Top system call latencies. * - ``lttng-iolog`` - I/O operations log. * - ``lttng-iousagetop`` - I/O usage top. * - ``lttng-irqfreq`` - Interrupt handler duration frequency distribution. * - ``lttng-irqlog`` - Interrupt log. * - ``lttng-irqstats`` - Hardware and software interrupt statistics. * - ``lttng-memtop`` - Per-TID top allocated/freed memory. 
* - ``lttng-schedfreq`` - Scheduling latency frequency distribution. * - ``lttng-schedlog`` - Scheduling log. * - ``lttng-schedstats`` - Scheduling latency statistics. * - ``lttng-schedtop`` - Scheduling top. * - ``lttng-periodlog`` - Period log. * - ``lttng-periodstats`` - Period duration statistics. * - ``lttng-periodtop`` - Period duration top. * - ``lttng-periodfreq`` - Period duration frequency distribution. * - ``lttng-syscallstats`` - Per-TID and global system call statistics. Use the ``--help`` option of any command to list the descriptions of the possible command-line options. .. NOTE:: You can set the ``LTTNG_ANALYSES_DEBUG`` environment variable to ``1`` when you launch an analysis to enable debug output. You can also use the general ``--debug`` option. Filtering options ----------------- Depending on the analysis, filter options are available. The complete list of filter options is: .. list-table:: Available filtering command-line options :header-rows: 1 * - Command-line option - Description * - ``--begin`` - Trace time at which to begin the analysis. Format: ``HH:MM:SS[.NNNNNNNNN]``. * - ``--cpu`` - Comma-delimited list of CPU IDs for which to display the results. * - ``--end`` - Trace time at which to end the analysis. Format: ``HH:MM:SS[.NNNNNNNNN]``. * - ``--irq`` - List of hardware IRQ numbers for which to display the results. * - ``--limit`` - Maximum number of output rows per table. This option is useful for "top" analyses, like ``lttng-cputop``. * - ``--min`` - Minimum duration (µs) to keep in results. * - ``--minsize`` - Minimum I/O operation size (B) to keep in results. * - ``--max`` - Maximum duration (µs) to keep in results. * - ``--maxsize`` - Maximum I/O operation size (B) to keep in results. * - ``--procname`` - Comma-delimited list of process names for which to display the results. * - ``--softirq`` - List of software IRQ numbers for which to display the results. * - ``--tid`` - Comma-delimited list of thread IDs for which to display the results. Period options -------------- LTTng analyses feature a powerful "period engine". A *period* is an interval which begins and ends under specific conditions. When the analysis results are displayed, they are broken down by the period instances that were opened and closed while the trace was processed. A period can have a parent. If so, an instance of its parent must exist for the period to begin at all. This tree structure of periods is useful for keeping a form of custom user state during an otherwise generic kernel analysis. .. ATTENTION:: The ``--period`` and ``--period-captures`` options' arguments include characters that are considered special by most shells, like ``$``, ``*``, and ``&``. Make sure to always **single-quote** those arguments when running the LTTng analyses on the command line. Period definition ~~~~~~~~~~~~~~~~~ You can define one or more periods on the command line, when launching an analysis, with the ``--period`` option. This option's argument accepts the following form (content within square brackets is optional):: [ NAME [ (PARENT) ] ] : BEGINEXPR [ : ENDEXPR ] ``NAME`` Optional name of the period definition. All periods opened from this definition have this name. The syntax of this name is the same as a C identifier. ``PARENT`` Optional name of a *previously defined* period which acts as the parent period definition of this definition. ``NAME`` must be set for ``PARENT`` to be set. ``BEGINEXPR`` Matching expression which a given event must match in order for an actual period to be instantiated by this definition.
``ENDEXPR`` Matching expression which a given event must match in order for an instance of this definition to be closed. If this part is omitted, ``BEGINEXPR`` is used for the ending expression too. Matching expression ................... A matching expression is a C-like logical expression. It supports nesting expressions with ``(`` and ``)``, as well as the ``&&`` (logical *AND*), ``||`` (logical *OR*), and ``!`` (logical *NOT*) operators. The precedence of those operators is the same as in the C language. The atomic operands in those logical expressions are comparisons. For the following comparison syntaxes, consider that: - ``EVT`` indicates an event source. The available event sources are: ``$evt`` Current event. ``$begin.$evt`` In ``BEGINEXPR``: current event (same as ``$evt``). In ``ENDEXPR``: event which, for this period instance, was matched when ``BEGINEXPR`` was evaluated. ``$parent.$begin.$evt`` Event which, for the parent period instance of this period instance, was matched when ``BEGINEXPR`` of the parent was evaluated. - ``FIELD`` indicates an event field source. The available event field sources are: ``NAME`` (direct field name) Automatic scope: try to find the field named ``NAME`` in the dynamic scopes in this order: #. Event payload #. Event context #. Event header #. Stream event context #. Packet context #. Packet header ``$payload.NAME`` Event payload field named ``NAME``. ``$ctx.NAME`` Event context field named ``NAME``. ``$header.NAME`` Event header field named ``NAME``. ``$stream_ctx.NAME`` Stream event context field named ``NAME``. ``$pkt_ctx.NAME`` Packet context field named ``NAME``. ``$pkt_header.NAME`` Packet header field named ``NAME``. - ``VALUE`` indicates one of: - A constant, decimal number. This can be an integer or a real number, positive or negative, and supports the ``e`` scientific notation. Examples: ``23``, ``-18.28``, ``7.2e9``. - A double-quoted literal string. ``"`` and ``\`` can be escaped with ``\``. Examples: ``"hello, world!"``, ``"here's another \"quoted\" string"``. - An event field, that is, ``EVT.FIELD``, considering the replacements described above. - ``NUMVALUE`` indicates one of: - A constant, decimal number. This can be an integer or a real number, positive or negative, and supports the ``e`` scientific notation. Examples: ``23``, ``-18.28``, ``7.2e9``. - An event field, that is, ``EVT.FIELD``, considering the replacements described above. .. list-table:: Available comparison syntaxes for matching expressions :header-rows: 1 * - Comparison syntax - Description * - #. ``EVT.$name == "NAME"`` #. ``EVT.$name != "NAME"`` #. ``EVT.$name =* "PATTERN"`` - Name matching: #. Name of event source ``EVT`` is equal to ``NAME``. #. Name of event source ``EVT`` is not equal to ``NAME``. #. Name of event source ``EVT`` satisfies the globbing pattern ``PATTERN`` (see `fnmatch `_). * - #. ``EVT.FIELD == VALUE`` #. ``EVT.FIELD != VALUE`` #. ``EVT.FIELD < NUMVALUE`` #. ``EVT.FIELD <= NUMVALUE`` #. ``EVT.FIELD > NUMVALUE`` #. ``EVT.FIELD >= NUMVALUE`` #. ``EVT.FIELD =* "PATTERN"`` - Value matching: #. The value of the field ``EVT.FIELD`` is equal to the value ``VALUE``. #. The value of the field ``EVT.FIELD`` is not equal to the value ``VALUE``. #. The value of the field ``EVT.FIELD`` is less than the value ``NUMVALUE``. #. The value of the field ``EVT.FIELD`` is less than or equal to the value ``NUMVALUE``. #. The value of the field ``EVT.FIELD`` is greater than the value ``NUMVALUE``. #.
The value of the field ``EVT.FIELD`` is greater than or equal to the value ``NUMVALUE``. #. The value of the field ``EVT.FIELD`` satisfies the globbing pattern ``PATTERN`` (see `fnmatch `_). In any case, if ``EVT.FIELD`` does not target an existing field, the comparison including it fails. Also, string fields cannot be compared to number values (constant or fields). Examples ........ - Create a period instance named ``switch`` when: - The current event name is ``sched_switch``. End this period instance when: - The current event name is ``sched_switch``. Period definition:: switch : $evt.$name == "sched_switch" - Create a period instance named ``switch`` when: - The current event name is ``sched_switch`` *AND* - The current event's ``next_tid`` field is *NOT* equal to 0. End this period instance when: - The current event name is ``sched_switch`` *AND* - The current event's ``prev_tid`` field is equal to the ``next_tid`` field of the matched event in the begin expression *AND* - The current event's ``cpu_id`` field is equal to the ``cpu_id`` field of the matched event in the begin expression. Period definition:: switch : $evt.$name == "sched_switch" && $evt.next_tid != 0 : $evt.$name == "sched_switch" && $evt.prev_tid == $begin.$evt.next_tid && $evt.cpu_id == $begin.$evt.cpu_id - Create a period instance named ``irq`` when: - A parent period instance named ``switch`` is currently open *AND* - The current event name satisfies the ``irq_*_entry`` globbing pattern *AND* - The current event's ``cpu_id`` field is equal to the ``cpu_id`` field of the matched event in the begin expression of the parent period instance. End this period instance when: - The current event name is ``irq_handler_exit`` *AND* - The current event's ``cpu_id`` field is equal to the ``cpu_id`` field of the matched event in the begin expression. Period definition:: irq(switch) : $evt.$name =* "irq_*_entry" && $evt.cpu_id == $parent.$begin.$evt.cpu_id : $evt.$name == "irq_handler_exit" && $evt.cpu_id == $begin.$evt.cpu_id - Create a period instance named ``hello`` when: - The current event name satisfies the ``hello*`` globbing pattern, but is not ``hello world``. End this period instance when: - The current event name is the same as the name of the matched event in the begin expression *AND* - The current event's ``theid`` header field is less than or equal to 231. Period definition:: hello : $evt.$name =* "hello*" && $evt.$name != "hello world" : $evt.$name == $begin.$evt.$name && $evt.$header.theid <= 231 Period captures ~~~~~~~~~~~~~~~ When a period instance begins or ends, the analysis can capture the current values of specific event fields and display them in its results. You can set period captures with the ``--period-captures`` command-line option. This option's argument accepts the following form (content within square brackets is optional):: NAME : BEGINCAPTURES [ : ENDCAPTURES ] ``NAME`` Name of period instances on which to apply those captures. A ``--period`` option in the same command line must define this name. ``BEGINCAPTURES`` Comma-delimited list of event fields to capture when the beginning expression of the period definition named ``NAME`` is matched. ``ENDCAPTURES`` Comma-delimited list of event fields to capture when the ending expression of the period definition named ``NAME`` is matched. If this part is omitted, there are no end captures.
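For example, assuming a period definition named ``open`` created with a ``--period`` option (this exact capture also appears in the examples further below), the following argument captures the event's ``filename`` field when a period instance begins, and the system call's return value, renamed ``fd``, when it ends::

    --period-captures 'open : filename = $evt.filename : fd = $evt.ret'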
The format of ``BEGINCAPTURES`` and ``ENDCAPTURES`` is a comma-delimited list of tokens having this format:: [ CAPTURENAME = ] EVT.FIELD or:: [ CAPTURENAME = ] EVT.$name ``CAPTURENAME`` Custom name for this capture. The syntax of this name is the same as a C identifier. If this part is omitted, the literal expression used for ``EVT.FIELD`` is used. ``EVT`` and ``FIELD`` See `Matching expression`_. Period select and aggregate parameters ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ With ``lttng-periodlog``, it is possible to see the list of periods in the context of their parent. When the ``--aggregate-by`` option is specified, each line of the log shows, side by side, the time range of the period selected with the ``--select`` argument and the time range of the parent period that contains it. In ``lttng-periodstats`` and ``lttng-periodfreq``, these two flags are used as filters to limit the output to only the relevant periods. If omitted, all existing combinations of parent/child statistics and frequency distributions are output. Grouping ~~~~~~~~ When fields are captured during the period analyses, it is possible to compute the statistics and frequency distribution grouped by the values of these fields, instead of globally for the trace. The format is:: --group-by "PERIODNAME.CAPTURENAME[, PERIODNAME.CAPTURENAME]" If multiple values are passed, the analysis outputs one list of tables (statistics and/or frequency distribution) for each unique combination of the fields' values. For example, if we track the ``open`` system call and we are interested in the average duration of this call by filename, we only have to capture the ``filename`` field and group the results by ``open.filename``. Examples ........ Begin captures only:: switch : $evt.next_tid, name = $evt.$name, msg_id = $parent.$begin.$evt.id Begin and end captures:: hello : beginning = $evt.$ctx.begin_ts, $evt.received_bytes : $evt.send_bytes, $evt.$name, begin = $begin.$evt.$ctx.begin_ts end = $evt.$ctx.end_ts Top scheduling latency (delay between ``sched_waking(tid=$TID)`` and ``sched_switch(next_tid=$TID)``), recording the procname of the waker (dependent on the ``procname`` context being present in the trace), the priority, and the target CPU: .. code-block:: bash lttng-periodtop /path/to/trace \ --period 'wake : $evt.$name == "sched_waking" : $evt.$name == "sched_switch" && $evt.next_tid == $begin.$evt.$payload.tid' \ --period-capture 'wake : waker = $evt.procname, prio = $evt.prio : wakee = $evt.next_comm, cpu = $evt.cpu_id' :: Timerange: [2016-07-21 17:07:47.832234248, 2016-07-21 17:07:48.948152659] Period top Begin End Duration (us) Name Begin capture End capture [17:07:47.835338581, 17:07:47.946834976] 111496.395 wake waker = lttng-consumerd wakee = kworker/0:2 prio = 20 cpu = 0 [17:07:47.850409057, 17:07:47.946829256] 96420.199 wake waker = swapper/2 wakee = migration/0 prio = -100 cpu = 0 [17:07:48.300313282, 17:07:48.300993892] 680.610 wake waker = Xorg wakee = ibus-ui-gtk3 prio = 20 cpu = 3 [17:07:48.300330060, 17:07:48.300920648] 590.588 wake waker = Xorg wakee = ibus-x11 prio = 20 cpu = 3 Log of all the IRQs handled while a user-space process was running, capturing the procname of the interrupted process as well as the name and number of the IRQ: ..
code-block:: bash lttng-periodlog /path/to/trace \ --period 'switch : $evt.$name == "sched_switch" && $evt.next_tid != 0 : $evt.$name == "sched_switch" && $evt.prev_tid == $begin.$evt.next_tid && $evt.cpu_id == $begin.$evt.cpu_id' \ --period 'irq(switch) : $evt.$name == "irq_handler_entry" && $evt.cpu_id == $parent.$begin.$evt.cpu_id : $evt.$name == "irq_handler_exit" && $evt.cpu_id == $begin.$evt.cpu_id' \ --period-capture 'irq : name = $evt.name, irq = $evt.irq, current = $parent.$begin.$evt.next_comm' :: Period log Begin End Duration (us) Name Begin capture End capture [10:58:26.169238875, 10:58:26.169244920] 6.045 switch [10:58:26.169598385, 10:58:26.169602967] 4.582 irq name = ahci irq = 41 current = lttng-consumerd [10:58:26.169811553, 10:58:26.169816218] 4.665 irq name = ahci irq = 41 current = lttng-consumerd [10:58:26.170025600, 10:58:26.170030197] 4.597 irq name = ahci irq = 41 current = lttng-consumerd [10:58:26.169236842, 10:58:26.170105711] 868.869 switch Log of all the ``open`` system call periods aggregated by the ``sched_switch`` in which they occurred: .. code-block:: bash lttng-periodlog /path/to/trace \ --period 'switch : $evt.$name == "sched_switch" : $evt.$name == "sched_switch" && $begin.$evt.next_tid == $evt.prev_tid && $begin.$evt.cpu_id == $evt.cpu_id' \ --period 'open(switch) : $evt.$name == "syscall_entry_open" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "syscall_exit_open" && $begin.$evt.cpu_id == $evt.cpu_id' \ --period-captures 'switch : comm = $evt.next_comm, cpu = $evt.cpu_id, tid = $evt.next_tid' \ --period-captures 'open : filename = $evt.filename : fd = $evt.ret' \ --select open --aggregate-by switch :: Aggregated log Aggregation of (open) by switch Parent | | Durations (us) | Begin End Duration (us) Name | Child name Count | Min Avg Max Stdev Runtime | Parent captures [10:58:26.222823677, 10:58:26.224039381] 1215.704 switch | switch/open 3 | 7.517 9.548 11.248 1.887 28.644 | switch.comm = bash, switch.cpu = 3, switch.tid = 12420 [10:58:26.856224058, 10:58:26.856589867] 365.809 switch | switch/open 1 | 77.620 77.620 77.620 ? 77.620 | switch.comm = ntpd, switch.cpu = 0, switch.tid = 11132 [10:58:27.000068031, 10:58:27.000954859] 886.828 switch | switch/open 15 | 9.224 16.126 37.190 6.681 241.894 | switch.comm = irqbalance, switch.cpu = 0, switch.tid = 1656 [10:58:27.225474282, 10:58:27.229160014] 3685.732 switch | switch/open 22 | 5.797 6.767 9.308 0.972 148.881 | switch.comm = bash, switch.cpu = 1, switch.tid = 12421 Statistics about the memory allocation performed within an ``open`` system call within a single ``sched_switch`` (no blocking or preemption): .. 
code-block:: bash lttng-periodstats /path/to/trace \ --period 'switch : $evt.$name == "sched_switch" : $evt.$name == "sched_switch" && $begin.$evt.next_tid == $evt.prev_tid && $begin.$evt.cpu_id == $evt.cpu_id' \ --period 'open(switch) : $evt.$name == "syscall_entry_open" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "syscall_exit_open" && $begin.$evt.cpu_id == $evt.cpu_id' \ --period 'alloc(open) : $evt.$name == "kmem_cache_alloc" && $parent.$begin.$evt.cpu_id == $evt.cpu_id : $evt.$name == "kmem_cache_free" && $evt.ptr == $begin.$evt.ptr' \ --period-captures 'switch : comm = $evt.next_comm, cpu = $evt.cpu_id, tid = $evt.next_tid' \ --period-captures 'open : filename = $evt.filename : fd = $evt.ret' \ --period-captures 'alloc : ptr = $evt.ptr' :: Timerange: [2015-01-06 10:58:26.140545481, 2015-01-06 10:58:27.229358936] Period tree: switch |-- open |-- alloc Period statistics (us) Period Count Min Avg Max Stdev Runtime switch 831 2.824 5233.363 172056.802 16197.531 4348924.614 switch/open 41 5.797 12.123 77.620 12.076 497.039 switch/open/alloc 44 1.152 10.277 74.476 11.582 452.175 Per-parent period duration statistics (us) With active children Period Parent Min Avg Max Stdev switch/open switch 28.644 124.260 241.894 92.667 switch/open/alloc switch 24.036 113.044 229.713 87.827 switch/open/alloc switch/open 4.550 11.029 74.476 11.768 Per-parent duration ratio (%) With active children Period Parent Min Avg Max Stdev switch/open switch 2 13.723 27 12.421 switch/open/alloc switch 1 12.901 25 12.041 switch/open/alloc switch/open 76 88.146 115 7.529 Per-parent period count statistics With active children Period Parent Min Avg Max Stdev switch/open switch 1 10.250 22 9.979 switch/open/alloc switch 1 11.000 22 10.551 switch/open/alloc switch/open 1 1.073 2 0.264 Per-parent period duration statistics (us) Globally Period Parent Min Avg Max Stdev switch/open switch 0.000 0.598 241.894 10.251 switch/open/alloc switch 0.000 0.544 229.713 9.443 switch/open/alloc switch/open 4.550 11.029 74.476 11.768 Per-parent duration ratio (%) Globally Period Parent Min Avg Max Stdev switch/open switch 0 0.066 27 1.209 switch/open/alloc switch 0 0.062 25 1.150 switch/open/alloc switch/open 76 88.146 115 7.529 Per-parent period count statistics Globally Period Parent Min Avg Max Stdev switch/open switch 0 0.049 22 0.929 switch/open/alloc switch 0 0.053 22 0.991 switch/open/alloc switch/open 1 1.073 2 0.264 These statistics can also be scoped by the value of the FD returned by the ``open`` system call, by appending ``--group-by "open.fd"`` to the previous command line. That way, the previous tables are output for each returned FD value, so it is possible to observe the behaviour based on the parameters of a system call. Using ``lttng-periodfreq`` or the ``--freq`` parameter, these tables can also be presented as frequency distributions. Progress options ---------------- If the `progressbar `_ optional dependency is installed, a progress bar is available to indicate the progress of the analysis. By default, the progress bar is based on the current event's timestamp. Progress options are: .. list-table:: Available progress command-line options :header-rows: 1 * - Command-line option - Description * - ``--no-progress`` - Disable the progress bar. * - ``--progress-use-size`` - Use the approximate event size instead of the current event's timestamp to estimate the progress value.
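For example, here is a hypothetical pair of invocations (reusing this document's ``/path/to/trace`` placeholder): one run of ``lttng-cputop`` with the progress bar disabled, and one run of ``lttng-iolatencytop`` estimating progress from the approximate event size rather than from timestamps:

.. code-block:: bash

    lttng-cputop --no-progress /path/to/trace
    lttng-iolatencytop --progress-use-size /path/to/trace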
Machine interface ----------------- If you want to display LTTng analyses results in a custom viewer, you can use the JSON-based LTTng analyses machine interface (LAMI). Each command in the previous table has its corresponding LAMI version with the ``-mi`` suffix. For example, the LAMI version of ``lttng-cputop`` is ``lttng-cputop-mi``. This version of LTTng analyses conforms to `LAMI 1.0 `_. Examples ======== This section shows a few examples of using some LTTng analyses. I/O --- Partition and system call latency statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolatencystats /path/to/trace :: Timerange: [2015-01-06 10:58:26.140545481, 2015-01-06 10:58:27.229358936] Syscalls latency statistics (usec): Type Count Min Average Max Stdev ----------------------------------------------------------------------------------------- Open 45 5.562 13.835 77.683 15.263 Read 109 0.316 5.774 62.569 9.277 Write 101 0.256 7.060 48.531 8.555 Sync 207 19.384 40.664 160.188 21.201 Disk latency statistics (usec): Name Count Min Average Max Stdev ----------------------------------------------------------------------------------------- dm-0 108 0.001 0.004 0.007 1.306 I/O request latency distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolatencyfreq /path/to/trace :: Timerange: [2015-01-06 10:58:26.140545481, 2015-01-06 10:58:27.229358936] Open latency distribution (usec) ############################################################################### 5.562 ███████████████████████████████████████████████████████████████████ 25 9.168 ██████████ 4 12.774 █████████████████████ 8 16.380 ████████ 3 19.986 █████ 2 23.592 0 27.198 0 30.804 0 34.410 ██ 1 38.016 0 41.623 0 45.229 0 48.835 0 52.441 0 56.047 0 59.653 0 63.259 0 66.865 0 70.471 0 74.077 █████ 2 Top system call latencies ~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolatencytop /path/to/trace --limit=3 --minsize=2 :: Checking the trace for lost events... 
Timerange: [2015-01-15 12:18:37.216484041, 2015-01-15 12:18:53.821580313] Top open syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:50.432950815,12:18:50.870648568] open 437697.753 N/A apache2 31517 /var/lib/php5/sess_0ifir2hangm8ggaljdphl9o5b5 (fd=13) [12:18:52.946080165,12:18:52.946132278] open 52.113 N/A apache2 31588 /var/lib/php5/sess_mr9045p1k55vin1h0vg7rhgd63 (fd=13) [12:18:46.800846035,12:18:46.800874916] open 28.881 N/A apache2 31591 /var/lib/php5/sess_r7c12pccfvjtas15g3j69u14h0 (fd=13) [12:18:51.389797604,12:18:51.389824426] open 26.822 N/A apache2 31520 /var/lib/php5/sess_4sdb1rtjkhb78sabnoj8gpbl00 (fd=13) Top read syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:37.256073107,12:18:37.256555967] read 482.860 7.00 B bash 10237 unknown (origin not found) (fd=3) [12:18:52.000209798,12:18:52.000252304] read 42.506 1.00 KB irqbalance 1337 /proc/interrupts (fd=3) [12:18:37.256559439,12:18:37.256601615] read 42.176 5.00 B bash 10237 unknown (origin not found) (fd=3) [12:18:42.000281918,12:18:42.000320016] read 38.098 1.00 KB irqbalance 1337 /proc/interrupts (fd=3) Top write syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:49.913241516,12:18:49.915908862] write 2667.346 95.00 B apache2 31584 /var/log/apache2/access.log (fd=8) [12:18:37.472823631,12:18:37.472859836] writev 36.205 21.97 KB apache2 31544 unknown (origin not found) (fd=12) [12:18:37.991578372,12:18:37.991612724] writev 34.352 21.97 KB apache2 31589 unknown (origin not found) (fd=12) [12:18:39.547778549,12:18:39.547812515] writev 33.966 21.97 KB apache2 31584 unknown (origin not found) (fd=12) Top sync syscall latencies (usec) Begin End Name Duration (usec) Size Proc PID Filename [12:18:50.162776739,12:18:51.157522361] sync 994745.622 N/A sync 22791 None (fd=None) [12:18:37.227867532,12:18:37.232289687] sync_file_range 4422.155 N/A lttng-consumerd 19964 /home/julien/lttng-traces/analysis-20150115-120942/kernel/metadata (fd=32) [12:18:37.238076585,12:18:37.239012027] sync_file_range 935.442 N/A lttng-consumerd 19964 /home/julien/lttng-traces/analysis-20150115-120942/kernel/metadata (fd=32) [12:18:37.220974711,12:18:37.221647124] sync_file_range 672.413 N/A lttng-consumerd 19964 /home/julien/lttng-traces/analysis-20150115-120942/kernel/metadata (fd=32) I/O operations log ~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-iolog /path/to/trace :: [10:58:26.221618530,10:58:26.221620659] write 2.129 8.00 B /usr/bin/x-term 11793 anon_inode:[eventfd] (fd=5) [10:58:26.221623609,10:58:26.221628055] read 4.446 50.00 B /usr/bin/x-term 11793 /dev/ptmx (fd=24) [10:58:26.221638929,10:58:26.221640008] write 1.079 8.00 B /usr/bin/x-term 11793 anon_inode:[eventfd] (fd=5) [10:58:26.221676232,10:58:26.221677385] read 1.153 8.00 B /usr/bin/x-term 11793 anon_inode:[eventfd] (fd=5) [10:58:26.223401804,10:58:26.223411683] open 9.879 N/A sleep 12420 /etc/ld.so.cache (fd=3) [10:58:26.223448060,10:58:26.223455577] open 7.517 N/A sleep 12420 /lib/x86_64-linux-gnu/libc.so.6 (fd=3) [10:58:26.223456522,10:58:26.223458898] read 2.376 832.00 B sleep 12420 /lib/x86_64-linux-gnu/libc.so.6 (fd=3) [10:58:26.223918068,10:58:26.223929316] open 11.248 N/A sleep 12420 (fd=3) [10:58:26.231881565,10:58:26.231895970] writev 14.405 16.00 B /usr/bin/x-term 11793 socket:[45650] (fd=4) [10:58:26.231979636,10:58:26.231988446] recvmsg 8.810 16.00 B Xorg 1827 socket:[47480] (fd=38) I/O usage top ~~~~~~~~~~~~~ .. 
code-block:: bash lttng-iousagetop /path/to/trace :: Timerange: [2014-10-07 16:36:00.733214969, 2014-10-07 16:36:18.804584183] Per-process I/O Read ############################################################################### ██████████████████████████████████████████████████ 16.00 MB lttng-consumerd (2619) 0 B file 4.00 B net 16.00 MB unknown █████ 1.72 MB lttng-consumerd (2619) 0 B file 0 B net 1.72 MB unknown █ 398.13 KB postgres (4219) 121.05 KB file 277.07 KB net 8.00 B unknown 256.09 KB postgres (1348) 0 B file 255.97 KB net 117.00 B unknown 204.81 KB postgres (4218) 204.81 KB file 0 B net 0 B unknown 123.77 KB postgres (4220) 117.50 KB file 6.26 KB net 8.00 B unknown Per-process I/O Write ############################################################################### ██████████████████████████████████████████████████ 16.00 MB lttng-consumerd (2619) 0 B file 8.00 MB net 8.00 MB unknown ██████ 2.20 MB postgres (4219) 2.00 MB file 202.23 KB net 0 B unknown █████ 1.73 MB lttng-consumerd (2619) 0 B file 887.73 KB net 882.58 KB unknown ██ 726.33 KB postgres (1165) 8.00 KB file 6.33 KB net 712.00 KB unknown 158.69 KB postgres (1168) 158.69 KB file 0 B net 0 B unknown 80.66 KB postgres (1348) 0 B file 80.66 KB net 0 B unknown Files Read ############################################################################### ██████████████████████████████████████████████████ 8.00 MB anon_inode:[lttng_stream] (lttng-consumerd) 'fd 32 in lttng-consumerd (2619)' █████ 834.41 KB base/16384/pg_internal.init 'fd 7 in postgres (4219)', 'fd 7 in postgres (4220)', 'fd 7 in postgres (4221)', 'fd 7 in postgres (4222)', 'fd 7 in postgres (4223)', 'fd 7 in postgres (4224)', 'fd 7 in postgres (4225)', 'fd 7 in postgres (4226)' █ 256.09 KB socket:[8893] (postgres) 'fd 9 in postgres (1348)' █ 174.69 KB pg_stat_tmp/pgstat.stat 'fd 9 in postgres (4218)', 'fd 9 in postgres (1167)' 109.48 KB global/pg_internal.init 'fd 7 in postgres (4218)', 'fd 7 in postgres (4219)', 'fd 7 in postgres (4220)', 'fd 7 in postgres (4221)', 'fd 7 in postgres (4222)', 'fd 7 in postgres (4223)', 'fd 7 in postgres (4224)', 'fd 7 in postgres (4225)', 'fd 7 in postgres (4226)' 104.30 KB base/11951/pg_internal.init 'fd 7 in postgres (4218)' 12.85 KB socket (lttng-sessiond) 'fd 30 in lttng-sessiond (384)' 4.50 KB global/pg_filenode.map 'fd 7 in postgres (4218)', 'fd 7 in postgres (4219)', 'fd 7 in postgres (4220)', 'fd 7 in postgres (4221)', 'fd 7 in postgres (4222)', 'fd 7 in postgres (4223)', 'fd 7 in postgres (4224)', 'fd 7 in postgres (4225)', 'fd 7 in postgres (4226)' 4.16 KB socket (postgres) 'fd 9 in postgres (4226)' 4.00 KB /proc/interrupts 'fd 3 in irqbalance (1104)' Files Write ############################################################################### ██████████████████████████████████████████████████ 8.00 MB socket:[56371] (lttng-consumerd) 'fd 30 in lttng-consumerd (2619)' █████████████████████████████████████████████████ 8.00 MB pipe:[53306] (lttng-consumerd) 'fd 12 in lttng-consumerd (2619)' ██████████ 1.76 MB pg_xlog/00000001000000000000000B 'fd 31 in postgres (4219)' █████ 887.82 KB socket:[56369] (lttng-consumerd) 'fd 26 in lttng-consumerd (2619)' █████ 882.58 KB pipe:[53309] (lttng-consumerd) 'fd 18 in lttng-consumerd (2619)' 160.00 KB /var/lib/postgresql/9.1/main/base/16384/16602 'fd 14 in postgres (1165)' 158.69 KB pg_stat_tmp/pgstat.tmp 'fd 3 in postgres (1168)' 144.00 KB /var/lib/postgresql/9.1/main/base/16384/16613 'fd 12 in postgres (1165)' 88.00 KB /var/lib/postgresql/9.1/main/base/16384/16609 'fd 11 in postgres 
(1165)' 78.28 KB socket:[8893] (postgres) 'fd 9 in postgres (1348)' Block I/O Read ############################################################################### Block I/O Write ############################################################################### ██████████████████████████████████████████████████ 1.76 MB postgres (pid=4219) ████ 160.00 KB postgres (pid=1168) ██ 100.00 KB kworker/u8:0 (pid=1540) ██ 96.00 KB jbd2/vda1-8 (pid=257) █ 40.00 KB postgres (pid=1166) 8.00 KB kworker/u9:0 (pid=4197) 4.00 KB kworker/u9:2 (pid=1381) Disk nr_sector ############################################################################### ███████████████████████████████████████████████████████████████████ 4416.00 sectors vda1 Disk nr_requests ############################################################################### ████████████████████████████████████████████████████████████████████ 177.00 requests vda1 Disk request time/sector ############################################################################### ██████████████████████████████████████████████████████████████████ 0.01 ms vda1 Network recv_bytes ############################################################################### ███████████████████████████████████████████████████████ 739.50 KB eth0 █████ 80.27 KB lo Network sent_bytes ############################################################################### ████████████████████████████████████████████████████████ 9.36 MB eth0 System calls -------- Per-TID and global system call statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-syscallstats /path/to/trace :: Timerange: [2015-01-15 12:18:37.216484041, 2015-01-15 12:18:53.821580313] Per-TID syscalls statistics (usec) find (22785) Count Min Average Max Stdev Return values - getdents 14240 0.380 364.301 43372.450 1629.390 {'success': 14240} - close 14236 0.233 0.506 4.932 0.217 {'success': 14236} - fchdir 14231 0.252 0.407 5.769 0.117 {'success': 14231} - open 7123 0.779 2.321 12.697 0.936 {'success': 7119, 'ENOENT': 4} - newfstatat 7118 1.457 143.562 28103.532 1410.281 {'success': 7118} - openat 7118 1.525 2.411 9.107 0.771 {'success': 7118} - newfstat 7117 0.272 0.654 8.707 0.248 {'success': 7117} - write 573 0.298 0.715 8.584 0.391 {'success': 573} - brk 27 0.615 5.768 30.792 7.830 {'success': 27} - rt_sigaction 22 0.227 0.283 0.589 0.098 {'success': 22} - mmap 12 1.116 2.116 3.597 0.762 {'success': 12} - mprotect 6 1.185 2.235 3.923 1.148 {'success': 6} - read 5 0.925 2.101 6.300 2.351 {'success': 5} - ioctl 4 0.342 1.151 2.280 0.873 {'success': 2, 'ENOTTY': 2} - access 4 1.166 2.530 4.202 1.527 {'ENOENT': 4} - rt_sigprocmask 3 0.325 0.570 0.979 0.357 {'success': 3} - dup2 2 0.250 0.562 0.874 ? {'success': 2} - munmap 2 3.006 5.399 7.792 ? {'success': 2} - execve 1 7277.974 7277.974 7277.974 ? {'success': 1} - setpgid 1 0.945 0.945 0.945 ? {'success': 1} - fcntl 1 ? 0.000 0.000 ? {} - newuname 1 1.240 1.240 1.240 ? {'success': 1} Total: 71847 ----------------------------------------------------------------------------------------------------------------- apache2 (31517) Count Min Average Max Stdev Return values - fcntl 192 ? 0.000 0.000 ? 
{} - newfstat 156 0.237 0.484 1.102 0.222 {'success': 156} - read 144 0.307 1.602 16.307 1.698 {'success': 117, 'EAGAIN': 27} - access 96 0.705 1.580 3.364 0.670 {'success': 12, 'ENOENT': 84} - newlstat 84 0.459 0.738 1.456 0.186 {'success': 63, 'ENOENT': 21} - newstat 74 0.735 2.266 11.212 1.772 {'success': 50, 'ENOENT': 24} - lseek 72 0.317 0.522 0.915 0.112 {'success': 72} - close 39 0.471 0.615 0.867 0.069 {'success': 39} - open 36 2.219 12162.689 437697.753 72948.868 {'success': 36} - getcwd 28 0.287 0.701 1.331 0.277 {'success': 28} - poll 27 1.080 1139.669 2851.163 856.723 {'success': 27} - times 24 0.765 0.956 1.327 0.107 {'success': 24} - setitimer 24 0.499 5.848 16.668 4.041 {'success': 24} - write 24 5.467 6.784 16.827 2.459 {'success': 24} - writev 24 10.241 17.645 29.817 5.116 {'success': 24} - mmap 15 3.060 3.482 4.406 0.317 {'success': 15} - munmap 15 2.944 3.502 4.154 0.427 {'success': 15} - brk 12 0.738 4.579 13.795 4.437 {'success': 12} - chdir 12 0.989 1.600 2.353 0.385 {'success': 12} - flock 6 0.906 1.282 2.043 0.423 {'success': 6} - rt_sigaction 6 0.530 0.725 1.123 0.217 {'success': 6} - pwrite64 6 1.262 1.430 1.692 0.143 {'success': 6} - rt_sigprocmask 6 0.539 0.650 0.976 0.162 {'success': 6} - shutdown 3 7.323 8.487 10.281 1.576 {'success': 3} - getsockname 3 1.015 1.228 1.585 0.311 {'success': 3} - accept4 3 5174453.611 3450157.282 5176018.235 ? {'success': 2} Total: 1131 Interrupts ---------- Hardware and software interrupt statistics ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. code-block:: bash lttng-irqstats /path/to/trace :: Timerange: [2014-03-11 16:05:41.314824752, 2014-03-11 16:05:45.041994298] Hard IRQ Duration (us) count min avg max stdev ----------------------------------------------------------------------------------| 1: 30 10.901 45.500 64.510 18.447 | 42: 259 3.203 7.863 21.426 3.183 | 43: 2 3.859 3.976 4.093 0.165 | 44: 92 0.300 3.995 6.542 2.181 | Soft IRQ Duration (us) Raise latency (us) count min avg max stdev | count min avg max stdev ----------------------------------------------------------------------------------|------------------------------------------------------------ 1: 495 0.202 21.058 51.060 11.047 | 53 2.141 11.217 20.005 7.233 3: 14 0.133 9.177 32.774 10.483 | 14 0.763 3.703 10.902 3.448 4: 257 5.981 29.064 125.862 15.891 | 257 0.891 3.104 15.054 2.046 6: 26 0.309 1.198 1.748 0.329 | 26 9.636 39.222 51.430 11.246 7: 299 1.185 14.768 90.465 15.992 | 298 1.286 31.387 61.700 11.866 9: 338 0.592 3.387 13.745 1.356 | 147 2.480 29.299 64.453 14.286 Interrupt handler duration frequency distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. 
code-block:: bash lttng-irqfreq --timerange=[16:05:42,16:05:45] --irq=44 --stats /path/to/trace :: Timerange: [2014-03-11 16:05:42.042034570, 2014-03-11 16:05:44.998914297] Hard IRQ Duration (us) count min avg max stdev ----------------------------------------------------------------------------------| 44: 72 0.300 4.018 6.542 2.164 | Frequency distribution iwlwifi (44) ############################################################################### 0.300 █████ 1.00 0.612 ██████████████████████████████████████████████████████████████ 12.00 0.924 ████████████████████ 4.00 1.236 ██████████ 2.00 1.548 0.00 1.861 █████ 1.00 2.173 0.00 2.485 █████ 1.00 2.797 ██████████████████████████ 5.00 3.109 █████ 1.00 3.421 ███████████████ 3.00 3.733 0.00 4.045 █████ 1.00 4.357 █████ 1.00 4.669 ██████████ 2.00 4.981 ██████████ 2.00 5.294 █████████████████████████████████████████ 8.00 5.606 ████████████████████████████████████████████████████████████████████ 13.00 5.918 ██████████████████████████████████████████████████████████████ 12.00 6.230 ███████████████ 3.00 Community ========= LTTng analyses is part of the `LTTng `_ project and shares its community. We hope you have fun trying this project and please remember it is a work in progress; feedback, bug reports and improvement ideas are always welcome! .. list-table:: LTTng analyses project's communication channels :header-rows: 1 * - Item - Location - Notes * - Mailing list - `lttng-dev `_ (``lttng-dev@lists.lttng.org``) - Preferably, use the ``[lttng-analyses]`` subject prefix * - IRC - ``#lttng`` on the OFTC network - * - Code contribution - Create a new GitHub `pull request `_ - * - Bug reporting - Create a new GitHub `issue `_ - * - Continuous integration - `lttng-analyses_master_build item `_ on LTTng's CI and `lttng/lttng-analyses project `_ on Travis CI - * - Blog - The `LTTng blog `_ contains some posts about LTTng analyses - Keywords: lttng tracing Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Intended Audience :: Developers Classifier: Intended Audience :: System Administrators Classifier: Topic :: System :: Monitoring Classifier: License :: OSI Approved :: MIT License Classifier: Programming Language :: Python :: 3.4 lttnganalyses-0.6.1/lttnganalyses.egg-info/SOURCES.txt0000664000175000017500000000547613033742625024345 0ustar mjeansonmjeanson00000000000000ChangeLog LICENSE MANIFEST.in README.rst lttng-analyses-record lttng-cputop lttng-iolatencyfreq lttng-iolatencystats lttng-iolatencytop lttng-iolog lttng-iousagetop lttng-irqfreq lttng-irqlog lttng-irqstats lttng-memtop lttng-periodfreq lttng-periodlog lttng-periodstats lttng-periodtop lttng-schedfreq lttng-schedlog lttng-schedstats lttng-schedtop lttng-syscallstats lttng-track-process mit-license.txt requirements.txt setup.cfg setup.py test-requirements.txt tox.ini versioneer.py lttnganalyses/__init__.py lttnganalyses/_version.py lttnganalyses.egg-info/PKG-INFO lttnganalyses.egg-info/SOURCES.txt lttnganalyses.egg-info/dependency_links.txt lttnganalyses.egg-info/entry_points.txt lttnganalyses.egg-info/requires.txt lttnganalyses.egg-info/top_level.txt lttnganalyses/cli/__init__.py lttnganalyses/cli/command.py lttnganalyses/cli/cputop.py lttnganalyses/cli/io.py lttnganalyses/cli/irq.py lttnganalyses/cli/memtop.py lttnganalyses/cli/mi.py lttnganalyses/cli/period_parsing.py lttnganalyses/cli/periods.py lttnganalyses/cli/progressbar.py lttnganalyses/cli/sched.py lttnganalyses/cli/syscallstats.py lttnganalyses/cli/termgraph.py lttnganalyses/common/__init__.py 
lttnganalyses/common/format_utils.py lttnganalyses/common/parse_utils.py lttnganalyses/common/time_utils.py lttnganalyses/common/trace_utils.py lttnganalyses/common/version_utils.py lttnganalyses/core/__init__.py lttnganalyses/core/analysis.py lttnganalyses/core/cputop.py lttnganalyses/core/event.py lttnganalyses/core/io.py lttnganalyses/core/irq.py lttnganalyses/core/memtop.py lttnganalyses/core/period.py lttnganalyses/core/periods.py lttnganalyses/core/sched.py lttnganalyses/core/stats.py lttnganalyses/core/syscalls.py lttnganalyses/linuxautomaton/__init__.py lttnganalyses/linuxautomaton/automaton.py lttnganalyses/linuxautomaton/block.py lttnganalyses/linuxautomaton/io.py lttnganalyses/linuxautomaton/irq.py lttnganalyses/linuxautomaton/mem.py lttnganalyses/linuxautomaton/net.py lttnganalyses/linuxautomaton/sched.py lttnganalyses/linuxautomaton/sp.py lttnganalyses/linuxautomaton/statedump.py lttnganalyses/linuxautomaton/sv.py lttnganalyses/linuxautomaton/syscalls.py tests/__init__.py tests/common/__init__.py tests/common/test_format_utils.py tests/common/test_parse_utils.py tests/common/test_trace_utils.py tests/common/utils.py tests/integration/__init__.py tests/integration/analysis_test.py tests/integration/gen_ctfwriter.py tests/integration/test_cputop.py tests/integration/test_intersect.py tests/integration/test_io.py tests/integration/test_irq.py tests/integration/trace_writer.py tests/integration/expected/cputop.txt tests/integration/expected/disable_intersect.txt tests/integration/expected/iolatencytop.txt tests/integration/expected/iousagetop.txt tests/integration/expected/irqlog.txt tests/integration/expected/irqstats.txt tests/integration/expected/no_intersection.txtlttnganalyses-0.6.1/lttnganalyses.egg-info/requires.txt0000664000175000017500000000004513033742625025044 0ustar mjeansonmjeanson00000000000000pyparsing [progressbar] progressbar lttnganalyses-0.6.1/lttnganalyses.egg-info/dependency_links.txt0000664000175000017500000000000113033742625026513 0ustar mjeansonmjeanson00000000000000 lttnganalyses-0.6.1/lttnganalyses.egg-info/top_level.txt0000664000175000017500000000001613033742625025174 0ustar mjeansonmjeanson00000000000000lttnganalyses lttnganalyses-0.6.1/lttng-schedlog0000775000175000017500000000235112665072151020730 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
from lttnganalyses.cli import sched if __name__ == '__main__': sched.runlog() lttnganalyses-0.6.1/requirements.txt0000664000175000017500000000004612746731246021356 0ustar mjeansonmjeanson00000000000000pyparsing progressbar33 [progressbar] lttnganalyses-0.6.1/lttng-periodstats0000775000175000017500000000235712746220524021506 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from lttnganalyses.cli import periods if __name__ == '__main__': periods.runstats() lttnganalyses-0.6.1/lttng-memtop0000775000175000017500000000235212553274232020442 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. 
from lttnganalyses.cli import memtop if __name__ == '__main__': memtop.run() lttnganalyses-0.6.1/setup.py0000775000175000017500000001326413033475105017602 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # Copyright (C) 2015 - Michael Jeanson # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. """LTTnganalyses setup script""" import shutil import sys from setuptools import setup import versioneer if sys.version_info[0:2] < (3, 4): raise RuntimeError("Python version >= 3.4 required.") if 'install' in sys.argv: if shutil.which('babeltrace') is None: print('lttnganalysescli needs the babeltrace executable.\n' 'See https://www.efficios.com/babeltrace for more info.', file=sys.stderr) sys.exit(1) try: __import__('babeltrace') except ImportError: print('lttnganalysescli needs the babeltrace python bindings.\n' 'See https://www.efficios.com/babeltrace for more info.', file=sys.stderr) sys.exit(1) def read_file(filename): """Read all contents of ``filename``.""" with open(filename, encoding='utf-8') as source: return source.read() setup( name='lttnganalyses', version=versioneer.get_version(), cmdclass=versioneer.get_cmdclass(), description='LTTng analyses', long_description=read_file('README.rst'), url='https://github.com/lttng/lttng-analyses', author='Julien Desfossez', author_email='jdesfossez@efficios.com', license='MIT', classifiers=[ 'Development Status :: 4 - Beta', 'Intended Audience :: Developers', 'Intended Audience :: System Administrators', 'Topic :: System :: Monitoring', 'License :: OSI Approved :: MIT License', 'Programming Language :: Python :: 3.4', ], keywords='lttng tracing', packages=[ 'lttnganalyses', 'lttnganalyses.common', 'lttnganalyses.core', 'lttnganalyses.cli', 'lttnganalyses.linuxautomaton' ], entry_points={ 'console_scripts': [ # human-readable output 'lttng-cputop = lttnganalyses.cli.cputop:run', 'lttng-iolatencyfreq = lttnganalyses.cli.io:runfreq', 'lttng-iolatencystats = lttnganalyses.cli.io:runstats', 'lttng-iolatencytop = lttnganalyses.cli.io:runlatencytop', 'lttng-iolog = lttnganalyses.cli.io:runlog', 'lttng-iousagetop = lttnganalyses.cli.io:runusage', 'lttng-irqfreq = lttnganalyses.cli.irq:runfreq', 'lttng-irqlog = lttnganalyses.cli.irq:runlog', 'lttng-irqstats = lttnganalyses.cli.irq:runstats', 'lttng-memtop = lttnganalyses.cli.memtop:run', 'lttng-syscallstats = lttnganalyses.cli.syscallstats:run', 'lttng-schedlog = lttnganalyses.cli.sched:runlog', 'lttng-schedtop = lttnganalyses.cli.sched:runtop', 'lttng-schedstats = lttnganalyses.cli.sched:runstats', 
'lttng-schedfreq = lttnganalyses.cli.sched:runfreq', 'lttng-periodlog = lttnganalyses.cli.periods:runlog', 'lttng-periodtop = lttnganalyses.cli.periods:runtop', 'lttng-periodstats = lttnganalyses.cli.periods:runstats', 'lttng-periodfreq = lttnganalyses.cli.periods:runfreq', # MI mode 'lttng-cputop-mi = lttnganalyses.cli.cputop:run_mi', 'lttng-memtop-mi = lttnganalyses.cli.memtop:run_mi', 'lttng-syscallstats-mi = lttnganalyses.cli.syscallstats:run_mi', 'lttng-irqfreq-mi = lttnganalyses.cli.irq:runfreq_mi', 'lttng-irqlog-mi = lttnganalyses.cli.irq:runlog_mi', 'lttng-irqstats-mi = lttnganalyses.cli.irq:runstats_mi', 'lttng-iolatencyfreq-mi = lttnganalyses.cli.io:runfreq_mi', 'lttng-iolatencystats-mi = lttnganalyses.cli.io:runstats_mi', 'lttng-iolatencytop-mi = lttnganalyses.cli.io:runlatencytop_mi', 'lttng-iolog-mi = lttnganalyses.cli.io:runlog_mi', 'lttng-iousagetop-mi = lttnganalyses.cli.io:runusage_mi', 'lttng-schedlog-mi = lttnganalyses.cli.sched:runlog_mi', 'lttng-schedtop-mi = lttnganalyses.cli.sched:runtop_mi', 'lttng-schedstats-mi = lttnganalyses.cli.sched:runstats_mi', 'lttng-schedfreq-mi = lttnganalyses.cli.sched:runfreq_mi', 'lttng-periodlog-mi = lttnganalyses.cli.periods:runlog_mi', 'lttng-periodtop-mi = lttnganalyses.cli.periods:runtop_mi', 'lttng-periodstats-mi = lttnganalyses.cli.periods:runstats_mi', 'lttng-periodfreq-mi = lttnganalyses.cli.periods:runfreq_mi', ], }, scripts=[ 'lttng-analyses-record', 'lttng-track-process' ], install_requires=[ 'pyparsing', ], extras_require={ 'progressbar': ["progressbar"] }, test_suite='tests', ) lttnganalyses-0.6.1/lttng-schedtop0000775000175000017500000000235112665072151020751 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from lttnganalyses.cli import sched if __name__ == '__main__': sched.runtop() lttnganalyses-0.6.1/mit-license.txt0000664000175000017500000000204112667420737021043 0ustar mjeansonmjeanson00000000000000Copyright (c) 2016 EfficiOS Inc. 
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. lttnganalyses-0.6.1/lttng-iolatencyfreq0000775000175000017500000000234612553274232022011 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3 # # The MIT License (MIT) # # Copyright (C) 2015 - Julien Desfossez # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to deal # in the Software without restriction, including without limitation the rights # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell # copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE # SOFTWARE. from lttnganalyses.cli import io if __name__ == '__main__': io.runfreq() lttnganalyses-0.6.1/ChangeLog0000664000175000017500000002305012745424561017642 0ustar mjeansonmjeanson000000000000002016-07-25 LTTng analyses 0.5.4 * fix: namespace softirq events * Fix: schedfreq: append freq result tables in MI mode * Fix argument order passed to parse_trace_collection_time_range. 
* Fix: tests trace utils with UTC+1400 TZ

2016-06-10 LTTng analyses 0.5.3

* Add travis CI support
* Doc: Add usage and debug to readme
* Fix: pep8 fixes
* Fix: tests when shell locale is not UTF-8 again

2016-06-07 LTTng analyses 0.5.2

* Fix: tests when shell locale is not UTF-8
* Handle events without fields
* Sanitize event names when using them as variable/function name
* Fix: command.py: pass params in order to parse_trace_collection_date()

2016-05-21 LTTng analyses 0.5.1

* Fix missing import

2016-05-27 LTTng analyses 0.5.0

* Stream intersect mode
* MI (LAMI): versioning, progress status, more sections
* Move LAMI specification to another repository
* Improved testing (unit and integration)
* Code cleanup and testing of common utils functions
* Various bugfixes

2016-03-07 LTTng analyses 0.4.3

* Tests fixes (timezone issues)

2016-03-01 LTTng analyses 0.4.2

* Packaging fixes

2016-02-29 LTTng analyses 0.4.1

* Packaging fixes

2016-02-26 LTTng analyses 0.4.0

* Scheduler latency analyses
* Priority fields in CPU and latency analyses
* Machine Interface (json) output
* Period-based analyses (begin and end events)
* Refactoring/Bugfixes/Cleanup
* Basic testing infrastructure

2015-07-13 LTTng analyses 0.3.0

* Merge pull request #23 from mjeanson/master
* Convert README to reStructuredText
* Fix pep8 errors
* Refactor in a single package with subpackages
* fix: stats with 0 requests
* Check for babeltrace python package on installation
* Define version once per package only
* Add ChangeLog file

2015-04-20 LTTng analyses 0.2.0

* Merge pull request #22 from abusque/refactor-syscallstats
* Bump version to 0.2
* Refactor syscallstats script to use new analysis backend
* Rename min/max attributes to avoid collision with built-ins
* Merge pull request #21 from abusque/decouple-io
* Implement check for --end argument before start of trace
* Style: fix indentation in _get_io_requests
* Fix: set pid correctly on FileStats init
* Fix typo in _fix_context_pid
* Fix: use TID instead of PID in file stats if PID is None
* Refactor io latency freq output
* Lint: remove unused import, fix 'dangerous' default args
* Refactor io top and log views
* Remove deprecated --extra argument
* Fix: correct typo and existence test in fd getter
* Fix: correct typo in ns_to_hour_nsec output
* Style: fix pylint/pep8 style issues
* Replace map() by list comprehension in disk latency stats
* Refactor IO Latency stats output methods
* Add generators to iterate over io requests
* Add method to compare equivalent io operations
* Fix: properly handle empty filters for IO file stats
* Fix FileStats reset() function
* Move _filter_process method to base command class
* Make _arg_pid_list list of ints instead of strings
* Refactor iotop per file analysis and output
* Refactor iotop output methods
* Add _print_ascii_graph method to simplify output of graphs
* Rename filter predicates to indicate visibility
* Remove deprecated breakcb in IO command
* Remove unused _compute_stats method from commands
* Rename IO command for consistency with other commands
* Track FDs chronologically in IO analysis
* Add timestamp to create/close FD notifications
* Remove dead code from IO cli
* Reset FD in IO Analysis
* Add support for pwrite* and pread* I/O syscalls
* Implement syscall I/O analysis
* Move returned_size attribute from SyscallIORequest into ReadWriteIORequest
* Send create process and fd notification on statedump events
* Send fd create and close notifications on sched events
* Fix: send create_fd notification for open io requests
* Add OP_READ_WRITE IO operation type for syscalls which both read and write
* Use a single method to track io request exits
* Refactor/rewrite IO state provider
* Refactor syscall analysis to use new SyscallEvent class
* Refactor NetStateProvider to use new SyscallEvent and io rq objects
* Refactor MemStateProvider to use new SyscallEvent and io rq objects
* Remove pending_syscalls array from State class
* Refactor statedump provider to track only state and not analysis related attributes
* Don't set deprecated parent_pid on FD object
* Use SyscallEvent objects in syscall state provider
* Remove Syscalls_stats class
* Remove analysis related attributes from FD class, add factory to create from open rq
* Add get_fd_type method to retrieve fd type from syscall name
* Add more IORequest classes, and io_rq attr to SyscallEvent
* Set SyscallEvent name using get_syscall_name method
* Remove analysis related attributes from Process state class
* Add more dup open syscalls, remove generic filenames from SyscallConsts
* Fix get_syscall_name string indexing
* Move IO syscalls handling into separate provider
* Strip prefixes from syscall names for brevity
* Merge branch 'master' into decouple-io
* Merge pull request #20 from abusque/linting
* Rename state to _state in providers for consistency
* Rename irq start/stop timestamps to begin/end for consistency
* Refactor IO Requests mechanism and (block I/O) analysis
* Track network usage in IO analysis
* Separate syscalls and io analyses
* Use del instead of pop when possible with fds and remove unused attributes
* Move date args processing to command, more linting
* Linting: rename p* to pattern
* Linting of common.py and related code
* Fix: make the regex strings raw strings
* fix for unknown pid in io.py
* Fix syscallstats command description method names
* Add IO analysis separate from syscalls
* Merge pull request #19 from jdesfossez/dev
* Fix: process the sched_switch for the swapper
* Fix: handle the case of missing PID
* Merge pull request #18 from abusque/decouple-cputop
* Revert accidental partial commit of syscalls.py
* Fix: remove deprecated last_sched attribute from Process class
* Fix: remove deprecated cpu_ns attribute from Process class
* Refactor cputop cli to work with new analysis module
* Implement cputop analysis module
* Fix: assign boolean instead of integer values for CLOEXEC
* Add class method to duplicate FD objects
* Remove non-state related attributes from process and cpu classes
* Refactor sched state provider to track current state only
* Remove deprecated perf context tracking in sched
* Fix: set cloexec on fd from flags on statedump
* remove old code (pre 0.1) that was kept as reference for the refactoring
* Merge pull request #17 from abusque/decouple-memtop
* Minor: fix pep8 style issues
* Decouple mem analysis from current state
* Rename notification callback methods to reflect public accessibility
* Add print date method to base command class
* Add reset method to Analysis classes
* Merge pull request #16 from abusque/decouple-modules
* Style: correct pep8 errors
* Fix: set cpu id in constructor
* Minor: add comment in irq state provider to clarify execptional softirq creation
* Style: rename method in memtop for consistency
* Fix tracking of softirq_raises and corresponding entries
* Fix: don't print raise_ts multiple times in irq log
* Simplify irq cli args transform
* Refactor IrqAnalysisCommand to work with rewritten analysis
* Add reset method to IrqStats
* Keep irq list by id and count irq raises
* Simplify filter_irq function in CLI
* Track CPU id in interrupt objects
* Rename irq analysis cli module to IrqAnalysisCommand to avoid ambiguity
* Implement filtering by duration for IrqAnalysis
* Update copyright info for modified files
* Implement initial IrqStats system for analysis
* fix: title
* new tool to filter a trace based on TID/Procname with follow-child support
* Style: replace double quotes by single quotes in lttnganalysescli
* Style: replace double quotes by single quotes in lttnganalyses
* Style: replace double quotes by single quotes in linuxautomaton
* Implement notification for communication from automaton to analyses
* Remove superfluous clear_screen string in irq _print_stats
* Refactor IRQ state provider and related classes
* Remove unused final argument in _print_results in cli
* Fix: don't count freed pages twice in memtop, reorganize printing code
* Fix: display unkwown for pname/pid in block read/write when we don't have the info
* Fix: check that current_tid is not None instead of -1
* Initialize self.state in Command module when creating automaton
* Pythonify tests for empty or uninitialized structures and arguments
* Use None instead of -1 or 0 for default argument values
* Add callback registration to analysis module
* Replace usage of -1 as default/invalid value by None
* Clean-up mem and sched state providers and related modules.
* Replace integer logic by boolean value
* fix: missing sync in i/o syscalls list
* handle sys_accept4
* Merge pull request #15 from abusque/deduplication
* Clean-up: dead code removal in linuxautomaton modules
* Remove deprecated ret_strings from syscalls.py
* Merge pull request #14 from abusque/email-fix
* Fix: correct typo in author email address
* Remove redundant IOCategory code
* Merge pull request #13 from abusque/chrono_fds
* Move track chrono fd code into method of Process class
* Track files from statedump in chrono_fds
* Fix: use event.timestamp instead of event[timestamp_begin]
* Track files opened before start of trace in chrono_fds
* Track chronological fd metadata
* fix override syscall name
* test override syscall name for epoll_ctl
* show tid value
* fix: handle unknown syscall return codes
* fix: handle unknown syscall return codes
* don't fail if some events are not available
lttnganalyses-0.6.1/lttng-iolog0000775000175000017500000000234512665072151020254 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3
#
# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
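#
# Like the other lttng-* wrappers in this tarball, this script is a thin
# entry point: it imports the matching analysis module from lttnganalyses.cli
# and calls the run function for the desired view, here io.runlog(). A sketch
# of a typical invocation (the trace path below is only a placeholder):
#
#   ./lttng-iolog /path/to/trace/kernel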
from lttnganalyses.cli import io

if __name__ == '__main__':
    io.runlog()
lttnganalyses-0.6.1/setup.cfg0000664000175000017500000000036013033742625017703 0ustar mjeansonmjeanson00000000000000[versioneer]
vcs = git
style = pep440
versionfile_source = lttnganalyses/_version.py
versionfile_build = lttnganalyses/_version.py
tag_prefix = v
parentdir_prefix = lttnganalyses-

[egg_info]
tag_svn_revision = 0
tag_date = 0
tag_build = 
lttnganalyses-0.6.1/tox.ini0000664000175000017500000000240112747717732017407 0ustar mjeansonmjeanson00000000000000[tox]
minversion = 1.9
envlist = py3,pep8
skipsdist = True
toxworkdir = {env:TOXWORKDIR:.tox}

[testenv]
skip_install = True
sitepackages = True
setenv =
    PYTHONPATH = {env:PYTHONPATH:}
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
commands = py.test --cov-config .coveragerc --cov=lttnganalyses --basetemp={envtmpdir} tests {posargs}

[testenv:noutf8]
setenv =
    LC_ALL=C
    PYTHONPATH = {env:PYTHONPATH:}
commands = py.test --cov-config .coveragerc --cov=lttnganalyses --basetemp={envtmpdir} tests {posargs}

[testenv:pep8]
commands = flake8 --ignore=E123,E125

[testenv:longregression]
commands = py.test --cov-config .coveragerc --cov=lttnganalyses --basetemp={envtmpdir} tests_long_regression {posargs}

[flake8]
# E123, E125 skipped as they are invalid PEP-8.
show-source = True
ignore = E123,E125
builtins = _
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build,versioneer.py,lttnganalyses/_version.py,tests/__init__.py

[testenv:pylint-errors]
deps = pylint >= 1.6
commands = pylint -f colorized -E lttnganalyses

[testenv:pylint-warnings]
deps = pylint >= 1.6
commands = pylint -f colorized -d all -e W -r n lttnganalyses

[testenv:pylint-full]
deps = pylint >= 1.6
commands = pylint -f colorized --disable=all -e R,E,W lttnganalyses
lttnganalyses-0.6.1/lttng-iolatencystats0000775000175000017500000000234712553274232022203 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3
#
# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from lttnganalyses.cli import io

if __name__ == '__main__':
    io.runstats()
lttnganalyses-0.6.1/lttng-syscallstats0000775000175000017500000000236612553274232021673 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3
#
# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from lttnganalyses.cli import syscallstats

if __name__ == '__main__':
    syscallstats.run()
lttnganalyses-0.6.1/lttng-irqlog0000775000175000017500000000234712553274232020442 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3
#
# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from lttnganalyses.cli import irq

if __name__ == '__main__':
    irq.runlog()
lttnganalyses-0.6.1/lttng-track-process0000775000175000017500000175470312723101501021723 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3
#
# Follow the execution of one or more processes throughout an LTTng trace and
# print a textual output similar to Babeltrace.
# When using the --procname option, the program tries to find the associated
# TID as soon as possible.
# The "follow-child" option only works for children started with fork after
# the beginning of the trace.
#
# When invoked without filtering arguments, all the events are displayed and
# an additional field at the beginning of the line shows the current TID,
# making it easy to grep/search in the text dump.
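#
# A sketch of typical invocations (the trace path is a placeholder; the
# authoritative option names are the ones defined by this script's argparse
# setup):
#
#   ./lttng-track-process --procname bash /path/to/trace/kernel
#   ./lttng-track-process --procname bash --follow-child /path/to/trace/kernel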
#
# To handle more events (including UST events), follow the comments below;
# most of this file has been autogenerated with parser_generator.py.
#
# Note: unbelievably slow (140x slower than babeltrace), blame python and a
# lot of string comparisons, but still much faster than a brain and eyes.
#
#
# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

import sys
import time
import argparse

NSEC_PER_SEC = 1000000000

try:
    from babeltrace import TraceCollection
except ImportError:
    # quick fix for debian-based distros
    sys.path.append("/usr/local/lib/python%d.%d/site-packages" %
                    (sys.version_info.major,
                     sys.version_info.minor))
    from babeltrace import TraceCollection


class TraceParser:
    def __init__(self, trace, arg_proc_list, arg_tid_list, arg_follow_child):
        self.trace = trace
        self.event_count = {}
        self.arg_proc_list = arg_proc_list
        self.arg_tid_list = arg_tid_list
        self.arg_follow_child = arg_follow_child
        # TID currently running on each CPU, updated on every sched_switch
        self.per_cpu_current = {}

    def ns_to_hour_nsec(self, ns):
        d = time.localtime(ns / NSEC_PER_SEC)
        return "%02d:%02d:%02d.%09d" % (d.tm_hour, d.tm_min, d.tm_sec,
                                        ns % NSEC_PER_SEC)

    def check_procname(self, name, tid):
        # once a process name from the list is seen, remember its TID
        if self.arg_proc_list is None:
            return
        if name in self.arg_proc_list:
            if self.arg_tid_list is None:
                self.arg_tid_list = []
            if tid not in self.arg_tid_list:
                self.arg_tid_list.append(int(tid))

    def tid_check(self, tid):
        return self.arg_tid_list is not None and tid in self.arg_tid_list

    def filter_event(self, event):
        # no filtering
        if self.arg_tid_list is None and self.arg_proc_list is None:
            return True
        # we don't know yet the PID we are interested in (match procname - pid)
        if self.arg_tid_list is None:
            return False
        cpu_id = event["cpu_id"]
        if cpu_id not in self.per_cpu_current:
            return False
        return self.tid_check(self.per_cpu_current[cpu_id])

    def get_tid_str(self, event):
        cpu_id = event["cpu_id"]
        if cpu_id not in self.per_cpu_current:
            tid = "?"
        else:
            tid = self.per_cpu_current[cpu_id]
        return "[{:>6}]".format(tid)

    def print_filter(self, event, string):
        if event.name.startswith("lttng_statedump"):
            if "tid" in event.keys():
                if not self.tid_check(event["tid"]):
                    return
            elif "pid" in event.keys():
                if not self.tid_check(event["pid"]):
                    return
            else:
                return
        elif not self.filter_event(event):
            return
        print(self.get_tid_str(event), string)

    def handle_special_events(self, event):
        # events that need some mangling/processing
        cpu_id = event["cpu_id"]
        if event.name == "sched_switch":
            timestamp = event.timestamp
            prev_comm = event["prev_comm"]
            prev_tid = event["prev_tid"]
            prev_prio = event["prev_prio"]
            prev_state = event["prev_state"]
            next_comm = event["next_comm"]
            next_tid = event["next_tid"]
            next_prio = event["next_prio"]
            # remember which TID now runs on this CPU; this is what the
            # per-event filtering relies on
            self.per_cpu_current[cpu_id] = next_tid
            # we want to see the scheduling out
            if self.tid_check(prev_tid):
                print(self.get_tid_str(event),
                      "[%s] %s: { cpu_id = %s }, { prev_comm = "
                      "%s, prev_tid = %s, prev_prio = %s, prev_state = %s, "
                      "next_comm = %s, next_tid = %s, next_prio = %s }" %
                      (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,
                       prev_comm, prev_tid, prev_prio, prev_state,
                       next_comm, next_tid, next_prio,))
        elif event.name == "sched_process_exec":
            tid = event["tid"]
            filename = event["filename"]
            name = filename.split("/")[-1]
            self.check_procname(name, tid)
        elif event.name == "syscall_entry_execve":
            if cpu_id not in self.per_cpu_current:
                return
            tid = self.per_cpu_current[cpu_id]
            filename = event["filename"]
            name = filename.split("/")[-1]
            self.check_procname(name, tid)
        elif event.name == "sched_process_fork" and self.arg_follow_child:
            pt = event["parent_tid"]
            pc = event["parent_comm"]
            ct = event["child_tid"]
            cc = event["child_comm"]
            if self.tid_check(pt) and ct not in self.arg_tid_list:
                self.arg_tid_list.append(ct)
            if self.arg_proc_list is not None and pc in self.arg_proc_list:
                if self.arg_tid_list is None:
                    self.arg_tid_list = []
                if ct not in self.arg_tid_list:
                    self.arg_tid_list.append(ct)

    def parse(self):
        # iterate over all the events
        for event in self.trace.events:
            method_name = "handle_%s" % \
                event.name.replace(":", "_").replace("+", "_")
            # try to resolve a procname/TID mapping from any event that
            # carries both a process name and a TID field
            if "comm" in event.keys() and "tid" in event.keys():
                self.check_procname(event["comm"], event["tid"])
            elif "name" in event.keys() and "tid" in event.keys():
                self.check_procname(event["name"], event["tid"])
            elif "next_comm" in event.keys() and "next_tid" in event.keys():
                self.check_procname(event["next_comm"], event["next_tid"])
            elif "prev_comm" in event.keys() and "prev_tid" in event.keys():
                self.check_procname(event["prev_comm"], event["prev_tid"])
            elif "parent_comm" in event.keys() and \
                    "parent_tid" in event.keys():
                self.check_procname(event["parent_comm"],
                                    event["parent_tid"])
            elif "child_comm" in event.keys() and \
                    "child_tid" in event.keys():
                self.check_procname(event["child_comm"], event["child_tid"])
            self.handle_special_events(event)
            # call the function to handle each event individually
            if hasattr(TraceParser, method_name):
                func = getattr(TraceParser, method_name)
                func(self, event)

    # everything between here and the end of the class has been generated
    # with parser_generator.py on a trace with all kernel events enabled
    # and transformed with:
    # :%s/print("\[%s\]/self.print_filter(event, "[%s]/g
    # :%s/self.event_count\[event.name\] += 1\n//
    # :%s/ self.print_filter/ self.print_filter/g

    def handle_compat_syscall_exit_setns(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id
= %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_sendmmsg(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_syncfs(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_clock_adjtime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] utx = event["utx"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, utx = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, utx,)) def handle_compat_syscall_exit_prlimit64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] old_rlim = event["old_rlim"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, old_rlim = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, old_rlim,)) def handle_compat_syscall_exit_fanotify_init(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_recvmmsg(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] mmsg = event["mmsg"] timeout = event["timeout"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, mmsg = %s, timeout = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, mmsg, timeout,)) def handle_compat_syscall_exit_perf_event_open(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_rt_tgsigqueueinfo(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_pwritev(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_preadv(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] vec = event["vec"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, vec = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, vec,)) def handle_compat_syscall_exit_inotify_init1(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_pipe2(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] fildes = event["fildes"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, fildes = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, fildes,)) def handle_compat_syscall_exit_dup3(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] 
self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_epoll_create1(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_eventfd2(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_signalfd4(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_timerfd_gettime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] otmr = event["otmr"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, otmr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, otmr,)) def handle_compat_syscall_exit_timerfd_settime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] otmr = event["otmr"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, otmr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, otmr,)) def handle_compat_syscall_exit_eventfd(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_timerfd_create(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_signalfd(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_utimensat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_epoll_pwait(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] events = event["events"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, events = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, events,)) def handle_compat_syscall_exit_getcpu(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] cpup = event["cpup"] nodep = event["nodep"] tcache = event["tcache"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, cpup = %s, nodep = %s, tcache = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, cpup, nodep, tcache,)) def handle_compat_syscall_exit_vmsplice(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_tee(self, event): timestamp = event.timestamp cpu_id = 
event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_splice(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_get_robust_list(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] head_ptr = event["head_ptr"] len_ptr = event["len_ptr"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, head_ptr = %s, len_ptr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, head_ptr, len_ptr,)) def handle_compat_syscall_exit_set_robust_list(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_unshare(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_ppoll(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] ufds = event["ufds"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, ufds = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, ufds,)) def handle_compat_syscall_exit_pselect6(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] inp = event["inp"] outp = event["outp"] exp = event["exp"] tsp = event["tsp"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, inp = %s, outp = %s, exp = %s, tsp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, inp, outp, exp, tsp,)) def handle_compat_syscall_exit_faccessat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_fchmodat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_readlinkat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,)) def handle_compat_syscall_exit_symlinkat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_linkat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_renameat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def 
handle_compat_syscall_exit_unlinkat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_fstatat64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] dfd = event["dfd"] filename = event["filename"] statbuf = event["statbuf"] flag = event["flag"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, dfd = %s, filename = %s, statbuf = %s, flag = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, dfd, filename, statbuf, flag,)) def handle_compat_syscall_exit_futimesat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_fchownat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_mknodat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_mkdirat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_openat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_inotify_rm_watch(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_inotify_add_watch(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_inotify_init(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_ioprio_get(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_ioprio_set(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_keyctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] arg2 = event["arg2"] arg3 = event["arg3"] arg4 = event["arg4"] arg5 = event["arg5"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, arg2 = %s, arg3 = %s, arg4 
= %s, arg5 = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, arg2, arg3, arg4, arg5,)) def handle_compat_syscall_exit_request_key(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_add_key(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_waitid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] infop = event["infop"] ru = event["ru"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, infop = %s, ru = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, infop, ru,)) def handle_compat_syscall_exit_kexec_load(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_mq_getsetattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] u_omqstat = event["u_omqstat"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, u_omqstat = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, u_omqstat,)) def handle_compat_syscall_exit_mq_notify(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_mq_timedreceive(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] u_msg_ptr = event["u_msg_ptr"] u_msg_prio = event["u_msg_prio"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, u_msg_ptr = %s, u_msg_prio = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, u_msg_ptr, u_msg_prio,)) def handle_compat_syscall_exit_mq_timedsend(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_mq_unlink(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_mq_open(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_utimes(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_tgkill(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_fstatfs64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = 
event["ret"] fd = event["fd"] sz = event["sz"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, fd = %s, sz = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, fd, sz, buf,)) def handle_compat_syscall_exit_statfs64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] pathname = event["pathname"] sz = event["sz"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, pathname = %s, sz = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, pathname, sz, buf,)) def handle_compat_syscall_exit_clock_nanosleep(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] rmtp = event["rmtp"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, rmtp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, rmtp,)) def handle_compat_syscall_exit_clock_getres(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] tp = event["tp"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, tp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, tp,)) def handle_compat_syscall_exit_clock_gettime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] tp = event["tp"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, tp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, tp,)) def handle_compat_syscall_exit_clock_settime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_timer_delete(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_timer_getoverrun(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_timer_gettime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] setting = event["setting"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, setting = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, setting,)) def handle_compat_syscall_exit_timer_settime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] old_setting = event["old_setting"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, old_setting = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, old_setting,)) def handle_compat_syscall_exit_timer_create(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] created_timer_id = event["created_timer_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, created_timer_id = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, created_timer_id,)) def handle_compat_syscall_exit_set_tid_address(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def 
handle_compat_syscall_exit_remap_file_pages(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_epoll_wait(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] events = event["events"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, events = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, events,)) def handle_compat_syscall_exit_epoll_ctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_epoll_create(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_exit_group(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_io_cancel(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] result = event["result"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, result = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, result,)) def handle_compat_syscall_exit_io_submit(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_io_getevents(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] events = event["events"] timeout = event["timeout"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, events = %s, timeout = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, events, timeout,)) def handle_compat_syscall_exit_io_destroy(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_io_setup(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_sched_getaffinity(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] user_mask_ptr = event["user_mask_ptr"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, user_mask_ptr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, user_mask_ptr,)) def handle_compat_syscall_exit_sched_setaffinity(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_futex(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] uaddr = event["uaddr"] uaddr2 = event["uaddr2"] 
self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, uaddr = %s, uaddr2 = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, uaddr, uaddr2,)) def handle_compat_syscall_exit_sendfile64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] offset = event["offset"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, offset = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, offset,)) def handle_compat_syscall_exit_tkill(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_fremovexattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_lremovexattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_removexattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_flistxattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] list = event["list"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, list = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, list,)) def handle_compat_syscall_exit_llistxattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] list = event["list"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, list = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, list,)) def handle_compat_syscall_exit_listxattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] list = event["list"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, list = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, list,)) def handle_compat_syscall_exit_fgetxattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] value = event["value"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, value = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, value,)) def handle_compat_syscall_exit_lgetxattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] value = event["value"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, value = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, value,)) def handle_compat_syscall_exit_getxattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] value = event["value"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, value = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, value,)) def handle_compat_syscall_exit_fsetxattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def 
handle_compat_syscall_exit_lsetxattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_setxattr(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_gettid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_fcntl64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] fd = event["fd"] cmd = event["cmd"] arg = event["arg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, fd = %s, cmd = %s, arg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, fd, cmd, arg,)) def handle_compat_syscall_exit_getdents64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] dirent = event["dirent"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, dirent = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, dirent,)) def handle_compat_syscall_exit_madvise(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_mincore(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] vec = event["vec"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, vec = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, vec,)) def handle_compat_syscall_exit_pivot_root(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_setfsgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_setfsuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_setgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_setuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_chown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_compat_syscall_exit_getresgid(self, event): timestamp = 
event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        rgid = event["rgid"]
        egid = event["egid"]
        sgid = event["sgid"]
        self.print_filter(event,
                          "[%s] %s: { cpu_id = %s }, { ret = %s, rgid = %s, "
                          "egid = %s, sgid = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name,
                           cpu_id, ret, rgid, egid, sgid,))

    # Every generated compat syscall handler below prints the same layout:
    # "[timestamp] event_name: { cpu_id = ... }, { payload }".  The two
    # helpers that follow keep each handler to a single call while
    # reproducing that layout exactly, field for field.
    def _print_compat_exit(self, event, *fields):
        # Exit events report the return value first, then any
        # syscall-specific payload fields, in the order given.
        extra = ''.join(', %s = %s' % (name, event[name]) for name in fields)
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s%s }" %
                          (self.ns_to_hour_nsec(event.timestamp), event.name,
                           event["cpu_id"], event["ret"], extra))

    def _print_compat_entry(self, event, *fields):
        # Entry events carry no return value, only the syscall arguments.
        args = ', '.join('%s = %s' % (name, event[name]) for name in fields)
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { %s }" %
                          (self.ns_to_hour_nsec(event.timestamp), event.name,
                           event["cpu_id"], args))

    def handle_compat_syscall_exit_setresgid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getresuid(self, event):
        self._print_compat_exit(event, 'ruid', 'euid', 'suid')

    def handle_compat_syscall_exit_setresuid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_fchown(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_setgroups(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getgroups(self, event):
        self._print_compat_exit(event, 'grouplist')

    def handle_compat_syscall_exit_setregid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_setreuid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getegid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_geteuid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getgid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getuid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_lchown(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_fstat64(self, event):
        self._print_compat_exit(event, 'fd', 'statbuf')

    def handle_compat_syscall_exit_lstat64(self, event):
        self._print_compat_exit(event, 'filename', 'statbuf')

    def handle_compat_syscall_exit_stat64(self, event):
        self._print_compat_exit(event, 'filename', 'statbuf')

    def handle_compat_syscall_exit_mmap_pgoff(self, event):
        self._print_compat_exit(event, 'addr', 'len', 'prot', 'flags', 'fd',
                                'pgoff')

    def handle_compat_syscall_exit_getrlimit(self, event):
        self._print_compat_exit(event, 'rlim')

    def handle_compat_syscall_exit_sendfile(self, event):
        self._print_compat_exit(event, 'out_fd', 'in_fd', 'offset', 'count')

    def handle_compat_syscall_exit_getcwd(self, event):
        self._print_compat_exit(event, 'buf')

    def handle_compat_syscall_exit_chown16(self, event):
        self._print_compat_exit(event, 'filename', 'user', 'group')

    def handle_compat_syscall_exit_rt_sigsuspend(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_rt_sigqueueinfo(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_rt_sigtimedwait(self, event):
        self._print_compat_exit(event, 'uthese', 'uinfo')

    def handle_compat_syscall_exit_rt_sigpending(self, event):
        self._print_compat_exit(event, 'set')

    def handle_compat_syscall_exit_rt_sigprocmask(self, event):
        self._print_compat_exit(event, 'oset')

    def handle_compat_syscall_exit_rt_sigaction(self, event):
        self._print_compat_exit(event, 'oact')

    def handle_compat_syscall_exit_prctl(self, event):
        self._print_compat_exit(event, 'arg2')

    def handle_compat_syscall_exit_getresgid16(self, event):
        self._print_compat_exit(event, 'rgid', 'egid', 'sgid')

    def handle_compat_syscall_exit_setresgid16(self, event):
        self._print_compat_exit(event, 'rgid', 'egid', 'sgid')

    def handle_compat_syscall_exit_poll(self, event):
        self._print_compat_exit(event, 'ufds')

    def handle_compat_syscall_exit_getresuid16(self, event):
        self._print_compat_exit(event, 'ruid', 'euid', 'suid')

    def handle_compat_syscall_exit_setresuid16(self, event):
        self._print_compat_exit(event, 'ruid', 'euid', 'suid')

    def handle_compat_syscall_exit_mremap(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_nanosleep(self, event):
        self._print_compat_exit(event, 'rmtp')

    def handle_compat_syscall_exit_sched_rr_get_interval(self, event):
        self._print_compat_exit(event, 'interval')

    def handle_compat_syscall_exit_sched_get_priority_min(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sched_get_priority_max(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sched_yield(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sched_getscheduler(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sched_setscheduler(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sched_getparam(self, event):
        self._print_compat_exit(event, 'param')

    def handle_compat_syscall_exit_sched_setparam(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_munlockall(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_mlockall(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_munlock(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_mlock(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sysctl(self, event):
        self._print_compat_exit(event, 'args')

    def handle_compat_syscall_exit_fdatasync(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getsid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_writev(self, event):
        self._print_compat_exit(event, 'vec')

    def handle_compat_syscall_exit_readv(self, event):
        self._print_compat_exit(event, 'vec')

    def handle_compat_syscall_exit_msync(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_flock(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_select(self, event):
        self._print_compat_exit(event, 'inp', 'outp', 'exp', 'tvp')

    def handle_compat_syscall_exit_getdents(self, event):
        self._print_compat_exit(event, 'dirent')

    def handle_compat_syscall_exit_llseek(self, event):
        self._print_compat_exit(event, 'fd', 'offset_high', 'offset_low',
                                'result', 'origin')

    def handle_compat_syscall_exit_setfsgid16(self, event):
        self._print_compat_exit(event, 'gid')

    def handle_compat_syscall_exit_setfsuid16(self, event):
        self._print_compat_exit(event, 'uid')

    def handle_compat_syscall_exit_personality(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sysfs(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_bdflush(self, event):
        self._print_compat_exit(event, 'func', 'data')

    def handle_compat_syscall_exit_fchdir(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getpgid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_quotactl(self, event):
        self._print_compat_exit(event, 'addr')

    def handle_compat_syscall_exit_delete_module(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_init_module(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sigprocmask(self, event):
        self._print_compat_exit(event, 'how', 'nset', 'oset')

    def handle_compat_syscall_exit_mprotect(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_adjtimex(self, event):
        self._print_compat_exit(event, 'txc_p')

    def handle_compat_syscall_exit_newuname(self, event):
        self._print_compat_exit(event, 'name')

    def handle_compat_syscall_exit_setdomainname(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_clone(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_fsync(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_ipc(self, event):
        self._print_compat_exit(event, 'call', 'first', 'second', 'third',
                                'ptr', 'fifth')

    def handle_compat_syscall_exit_sysinfo(self, event):
        self._print_compat_exit(event, 'info')

    def handle_compat_syscall_exit_swapoff(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_wait4(self, event):
        self._print_compat_exit(event, 'stat_addr', 'ru')

    def handle_compat_syscall_exit_vhangup(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_uname(self, event):
        self._print_compat_exit(event, 'name')

    def handle_compat_syscall_exit_newfstat(self, event):
        self._print_compat_exit(event, 'statbuf')

    def handle_compat_syscall_exit_newlstat(self, event):
        self._print_compat_exit(event, 'statbuf')

    def handle_compat_syscall_exit_newstat(self, event):
        self._print_compat_exit(event, 'statbuf')

    def handle_compat_syscall_exit_getitimer(self, event):
        self._print_compat_exit(event, 'value')

    def handle_compat_syscall_exit_setitimer(self, event):
        self._print_compat_exit(event, 'ovalue')

    def handle_compat_syscall_exit_syslog(self, event):
        self._print_compat_exit(event, 'buf')

    def handle_compat_syscall_exit_socketcall(self, event):
        self._print_compat_exit(event, 'call', 'args')

    def handle_compat_syscall_exit_fstatfs(self, event):
        self._print_compat_exit(event, 'buf')

    def handle_compat_syscall_exit_statfs(self, event):
        self._print_compat_exit(event, 'buf')

    def handle_compat_syscall_exit_setpriority(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getpriority(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_fchown16(self, event):
        self._print_compat_exit(event, 'fd', 'user', 'group')

    def handle_compat_syscall_exit_fchmod(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_ftruncate(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_truncate(self, event):
        self._print_compat_exit(event)
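    # Illustration only (the values below are made up, not taken from a
    # real trace): for a successful compat getresgid() exit on CPU 0, the
    # handlers above emit a line of the form:
    #
    #   [14:57:13.123456789] compat_syscall_exit_getresgid: { cpu_id = 0 },
    #   { ret = 0, rgid = 1000, egid = 1000, sgid = 1000 }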
    def handle_compat_syscall_exit_munmap(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_old_mmap(self, event):
        self._print_compat_exit(event, 'arg')

    def handle_compat_syscall_exit_old_readdir(self, event):
        self._print_compat_exit(event, 'fd', 'dirent', 'count')

    def handle_compat_syscall_exit_reboot(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_swapon(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_uselib(self, event):
        self._print_compat_exit(event, 'library')

    def handle_compat_syscall_exit_readlink(self, event):
        self._print_compat_exit(event, 'buf')

    def handle_compat_syscall_exit_lstat(self, event):
        self._print_compat_exit(event, 'filename', 'statbuf')

    def handle_compat_syscall_exit_symlink(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_old_select(self, event):
        self._print_compat_exit(event, 'arg')

    def handle_compat_syscall_exit_setgroups16(self, event):
        self._print_compat_exit(event, 'gidsetsize', 'grouplist')

    def handle_compat_syscall_exit_getgroups16(self, event):
        self._print_compat_exit(event, 'gidsetsize', 'grouplist')

    def handle_compat_syscall_exit_settimeofday(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_gettimeofday(self, event):
        self._print_compat_exit(event, 'tv', 'tz')

    def handle_compat_syscall_exit_getrusage(self, event):
        self._print_compat_exit(event, 'ru')

    def handle_compat_syscall_exit_old_getrlimit(self, event):
        self._print_compat_exit(event, 'resource', 'rlim')

    def handle_compat_syscall_exit_setrlimit(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sethostname(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sigpending(self, event):
        self._print_compat_exit(event, 'set')

    def handle_compat_syscall_exit_setregid16(self, event):
        self._print_compat_exit(event, 'rgid', 'egid')

    def handle_compat_syscall_exit_setreuid16(self, event):
        self._print_compat_exit(event, 'ruid', 'euid')

    def handle_compat_syscall_exit_ssetmask(self, event):
        self._print_compat_exit(event, 'newmask')

    def handle_compat_syscall_exit_sgetmask(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_setsid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getpgrp(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getppid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_dup2(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_ustat(self, event):
        self._print_compat_exit(event, 'ubuf')

    def handle_compat_syscall_exit_chroot(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_umask(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_olduname(self, event):
        self._print_compat_exit(event, 'name')

    def handle_compat_syscall_exit_setpgid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_fcntl(self, event):
        self._print_compat_exit(event, 'arg')

    def handle_compat_syscall_exit_ioctl(self, event):
        self._print_compat_exit(event, 'arg')

    def handle_compat_syscall_exit_umount(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_acct(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getegid16(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_geteuid16(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_signal(self, event):
        self._print_compat_exit(event, 'sig', 'handler')

    def handle_compat_syscall_exit_getgid16(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_setgid16(self, event):
        self._print_compat_exit(event, 'gid')

    def handle_compat_syscall_exit_brk(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_times(self, event):
        self._print_compat_exit(event, 'tbuf')

    def handle_compat_syscall_exit_pipe(self, event):
        self._print_compat_exit(event, 'fildes')

    def handle_compat_syscall_exit_dup(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_rmdir(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_mkdir(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_rename(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_kill(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_sync(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_nice(self, event):
        self._print_compat_exit(event, 'increment')

    def handle_compat_syscall_exit_access(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_utime(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_pause(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_fstat(self, event):
        self._print_compat_exit(event, 'fd', 'statbuf')

    def handle_compat_syscall_exit_alarm(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_ptrace(self, event):
        self._print_compat_exit(event, 'addr', 'data')

    def handle_compat_syscall_exit_stime(self, event):
        self._print_compat_exit(event, 'tptr')

    def handle_compat_syscall_exit_getuid16(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_setuid16(self, event):
        self._print_compat_exit(event, 'uid')

    def handle_compat_syscall_exit_oldumount(self, event):
        self._print_compat_exit(event, 'name')

    def handle_compat_syscall_exit_mount(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_getpid(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_lseek(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_stat(self, event):
        self._print_compat_exit(event, 'filename', 'statbuf')

    def handle_compat_syscall_exit_lchown16(self, event):
        self._print_compat_exit(event, 'filename', 'user', 'group')

    def handle_compat_syscall_exit_chmod(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_mknod(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_time(self, event):
        self._print_compat_exit(event, 'tloc')

    def handle_compat_syscall_exit_chdir(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_execve(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_unlink(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_link(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_creat(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_waitpid(self, event):
        self._print_compat_exit(event, 'pid', 'stat_addr', 'options')

    def handle_compat_syscall_exit_close(self, event):
        self._print_compat_exit(event)
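    # The dispatch site that selects one of these handlers is outside this
    # excerpt.  A minimal sketch of the usual pattern for this kind of
    # generated tracer code (hypothetical code, shown for illustration
    # only) resolves the handler from the event name:
    #
    #   handler = getattr(self, 'handle_%s' % event.name, None)
    #   if handler is not None:
    #       handler(event)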
    def handle_compat_syscall_exit_open(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_write(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_read(self, event):
        self._print_compat_exit(event, 'buf')

    def handle_compat_syscall_exit_exit(self, event):
        self._print_compat_exit(event)

    def handle_compat_syscall_exit_restart_syscall(self, event):
        self._print_compat_exit(event)

    # Entry-side compat syscall handlers follow; they print the syscall
    # arguments instead of a return value.
    def handle_compat_syscall_entry_setns(self, event):
        self._print_compat_entry(event, 'fd', 'nstype')

    def handle_compat_syscall_entry_sendmmsg(self, event):
        self._print_compat_entry(event, 'fd', 'mmsg', 'vlen', 'flags')

    def handle_compat_syscall_entry_syncfs(self, event):
        self._print_compat_entry(event, 'fd')

    def handle_compat_syscall_entry_clock_adjtime(self, event):
        self._print_compat_entry(event, 'which_clock', 'utx')

    def handle_compat_syscall_entry_prlimit64(self, event):
        self._print_compat_entry(event, 'pid', 'resource', 'new_rlim')

    def handle_compat_syscall_entry_fanotify_init(self, event):
        self._print_compat_entry(event, 'flags', 'event_f_flags')

    def handle_compat_syscall_entry_recvmmsg(self, event):
        self._print_compat_entry(event, 'fd', 'vlen', 'flags', 'timeout')

    def handle_compat_syscall_entry_perf_event_open(self, event):
        self._print_compat_entry(event, 'attr_uptr', 'pid', 'cpu',
                                 'group_fd', 'flags')

    def handle_compat_syscall_entry_rt_tgsigqueueinfo(self, event):
        self._print_compat_entry(event, 'tgid', 'pid', 'sig', 'uinfo')

    def handle_compat_syscall_entry_pwritev(self, event):
        self._print_compat_entry(event, 'fd', 'vec', 'vlen', 'pos_l',
                                 'pos_h')

    def handle_compat_syscall_entry_preadv(self, event):
        self._print_compat_entry(event, 'fd', 'vlen', 'pos_l', 'pos_h')

    def handle_compat_syscall_entry_inotify_init1(self, event):
        self._print_compat_entry(event, 'flags')

    def handle_compat_syscall_entry_pipe2(self, event):
        self._print_compat_entry(event, 'flags')

    def handle_compat_syscall_entry_dup3(self, event):
        self._print_compat_entry(event, 'oldfd', 'newfd', 'flags')

    def handle_compat_syscall_entry_epoll_create1(self, event):
        self._print_compat_entry(event, 'flags')

    def handle_compat_syscall_entry_eventfd2(self, event):
        self._print_compat_entry(event, 'count', 'flags')

    def handle_compat_syscall_entry_signalfd4(self, event):
        self._print_compat_entry(event, 'ufd', 'user_mask', 'sizemask',
                                 'flags')

    def handle_compat_syscall_entry_timerfd_gettime(self, event):
        self._print_compat_entry(event, 'ufd')

    def handle_compat_syscall_entry_timerfd_settime(self, event):
        self._print_compat_entry(event, 'ufd', 'flags', 'utmr')

    def handle_compat_syscall_entry_eventfd(self, event):
        self._print_compat_entry(event, 'count')

    def handle_compat_syscall_entry_timerfd_create(self, event):
        self._print_compat_entry(event, 'clockid', 'flags')

    def handle_compat_syscall_entry_signalfd(self, event):
        self._print_compat_entry(event, 'ufd', 'user_mask', 'sizemask')

    def handle_compat_syscall_entry_utimensat(self, event):
        self._print_compat_entry(event, 'dfd', 'filename', 'utimes',
                                 'flags')

    def handle_compat_syscall_entry_epoll_pwait(self, event):
        self._print_compat_entry(event, 'epfd', 'maxevents', 'timeout',
                                 'sigmask', 'sigsetsize')

    def handle_compat_syscall_entry_getcpu(self, event):
        self._print_compat_entry(event, 'tcache')

    def handle_compat_syscall_entry_vmsplice(self, event):
        self._print_compat_entry(event, 'fd', 'iov', 'nr_segs', 'flags')

    def handle_compat_syscall_entry_tee(self, event):
        self._print_compat_entry(event, 'fdin', 'fdout', 'len', 'flags')

    def handle_compat_syscall_entry_splice(self, event):
        self._print_compat_entry(event, 'fd_in', 'off_in', 'fd_out',
                                 'off_out', 'len', 'flags')

    def handle_compat_syscall_entry_get_robust_list(self, event):
        self._print_compat_entry(event, 'pid')

    def handle_compat_syscall_entry_set_robust_list(self, event):
        self._print_compat_entry(event, 'head', 'len')

    def handle_compat_syscall_entry_unshare(self, event):
        self._print_compat_entry(event, 'unshare_flags')

    def handle_compat_syscall_entry_ppoll(self, event):
        self._print_compat_entry(event, 'ufds', 'nfds', 'tsp', 'sigmask',
                                 'sigsetsize')

    def handle_compat_syscall_entry_pselect6(self, event):
        self._print_compat_entry(event, 'n', 'inp', 'outp', 'exp', 'tsp',
                                 'sig')

    def handle_compat_syscall_entry_faccessat(self, event):
        self._print_compat_entry(event, 'dfd', 'filename', 'mode')

    def handle_compat_syscall_entry_fchmodat(self, event):
        self._print_compat_entry(event, 'dfd', 'filename', 'mode')

    def handle_compat_syscall_entry_readlinkat(self, event):
        self._print_compat_entry(event, 'dfd', 'pathname', 'bufsiz')

    def handle_compat_syscall_entry_symlinkat(self, event):
        self._print_compat_entry(event, 'oldname', 'newdfd', 'newname')
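    # Illustration only (made-up values): the openat() entry handler below
    # prints a line of the form:
    #
    #   [14:57:13.124000000] compat_syscall_entry_openat: { cpu_id = 1 },
    #   { dfd = -100, filename = /etc/hosts, flags = 0, mode = 0 }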
    # Field table for the remaining compat syscall-entry events in this
    # region, in their original order; each entry maps a syscall name to
    # the payload fields its former hand-written handler fetched and
    # printed.  An empty tuple marks an argument-less syscall.
    _compat_syscall_entry_fields = {
        'get_robust_list': ('pid',),
        'set_robust_list': ('head', 'len'),
        'unshare': ('unshare_flags',),
        'ppoll': ('ufds', 'nfds', 'tsp', 'sigmask', 'sigsetsize'),
        'pselect6': ('n', 'inp', 'outp', 'exp', 'tsp', 'sig'),
        'faccessat': ('dfd', 'filename', 'mode'),
        'fchmodat': ('dfd', 'filename', 'mode'),
        'readlinkat': ('dfd', 'pathname', 'bufsiz'),
        'symlinkat': ('oldname', 'newdfd', 'newname'),
        'linkat': ('olddfd', 'oldname', 'newdfd', 'newname', 'flags'),
        'renameat': ('olddfd', 'oldname', 'newdfd', 'newname'),
        'unlinkat': ('dfd', 'pathname', 'flag'),
        'fstatat64': ('dfd', 'filename', 'statbuf', 'flag'),
        'futimesat': ('dfd', 'filename', 'utimes'),
        'fchownat': ('dfd', 'filename', 'user', 'group', 'flag'),
        'mknodat': ('dfd', 'filename', 'mode', 'dev'),
        'mkdirat': ('dfd', 'pathname', 'mode'),
        'openat': ('dfd', 'filename', 'flags', 'mode'),
        'inotify_rm_watch': ('fd', 'wd'),
        'inotify_add_watch': ('fd', 'pathname', 'mask'),
        'inotify_init': (),
        'ioprio_get': ('which', 'who'),
        'ioprio_set': ('which', 'who', 'ioprio'),
        'keyctl': ('option', 'arg2', 'arg3', 'arg4', 'arg5'),
        'request_key': ('_type', '_description', '_callout_info', 'destringid'),
        'add_key': ('_type', '_description', '_payload', 'plen', 'ringid'),
        'waitid': ('which', 'upid', 'options'),
        'kexec_load': ('entry', 'nr_segments', 'segments', 'flags'),
        'mq_getsetattr': ('mqdes', 'u_mqstat'),
        'mq_notify': ('mqdes', 'u_notification'),
        'mq_timedreceive': ('mqdes', 'msg_len', 'u_abs_timeout'),
        'mq_timedsend': ('mqdes', 'u_msg_ptr', 'msg_len', 'msg_prio', 'u_abs_timeout'),
        'mq_unlink': ('u_name',),
        'mq_open': ('u_name', 'oflag', 'mode', 'u_attr'),
        'utimes': ('filename', 'utimes'),
        'tgkill': ('tgid', 'pid', 'sig'),
        'fstatfs64': ('fd', 'sz', 'buf'),
        'statfs64': ('pathname', 'sz', 'buf'),
        'clock_nanosleep': ('which_clock', 'flags', 'rqtp'),
        'clock_getres': ('which_clock',),
        'clock_gettime': ('which_clock',),
        'clock_settime': ('which_clock', 'tp'),
        'timer_delete': ('timer_id',),
        'timer_getoverrun': ('timer_id',),
        'timer_gettime': ('timer_id',),
        'timer_settime': ('timer_id', 'flags', 'new_setting'),
        'timer_create': ('which_clock', 'timer_event_spec'),
        'set_tid_address': ('tidptr',),
        'remap_file_pages': ('start', 'size', 'prot', 'pgoff', 'flags'),
        'epoll_wait': ('epfd', 'maxevents', 'timeout'),
        'epoll_ctl': ('epfd', 'op', 'fd', 'event'),
        'epoll_create': ('size',),
        'exit_group': ('error_code',),
        'io_cancel': ('ctx_id', 'iocb'),
        'io_submit': ('ctx_id', 'nr', 'iocbpp'),
        'io_getevents': ('ctx_id', 'min_nr', 'nr', 'timeout'),
        'io_destroy': ('ctx',),
        'io_setup': ('nr_events', 'ctxp'),
        'sched_getaffinity': ('pid', 'len'),
        'sched_setaffinity': ('pid', 'len', 'user_mask_ptr'),
        'futex': ('uaddr', 'op', 'val', 'utime', 'uaddr2', 'val3'),
        'sendfile64': ('out_fd', 'in_fd', 'offset', 'count'),
        'tkill': ('pid', 'sig'),
        'fremovexattr': ('fd', 'name'),
        'lremovexattr': ('pathname', 'name'),
        'removexattr': ('pathname', 'name'),
        'flistxattr': ('fd', 'size'),
        'llistxattr': ('pathname', 'size'),
        'listxattr': ('pathname', 'size'),
        'fgetxattr': ('fd', 'name', 'size'),
        'lgetxattr': ('pathname', 'name', 'size'),
        'getxattr': ('pathname', 'name', 'size'),
        'fsetxattr': ('fd', 'name', 'value', 'size', 'flags'),
        'lsetxattr': ('pathname', 'name', 'value', 'size', 'flags'),
        'setxattr': ('pathname', 'name', 'value', 'size', 'flags'),
        'gettid': (),
        'fcntl64': ('fd', 'cmd', 'arg'),
        'getdents64': ('fd', 'count'),
        'madvise': ('start', 'len_in', 'behavior'),
        'mincore': ('start', 'len'),
        'pivot_root': ('new_root', 'put_old'),
        'setfsgid': ('gid',),
        'setfsuid': ('uid',),
        'setgid': ('gid',),
        'setuid': ('uid',),
        'chown': ('filename', 'user', 'group'),
        'getresgid': (),
        'setresgid': ('rgid', 'egid', 'sgid'),
        'getresuid': (),
        'setresuid': ('ruid', 'euid', 'suid'),
        'fchown': ('fd', 'user', 'group'),
        'setgroups': ('gidsetsize', 'grouplist'),
        'getgroups': ('gidsetsize',),
        'setregid': ('rgid', 'egid'),
        'setreuid': ('ruid', 'euid'),
        'getegid': (),
        'geteuid': (),
        'getgid': (),
        'getuid': (),
        'lchown': ('filename', 'user', 'group'),
        'fstat64': ('fd', 'statbuf'),
        'lstat64': ('filename', 'statbuf'),
        'stat64': ('filename', 'statbuf'),
        'mmap_pgoff': ('addr', 'len', 'prot', 'flags', 'fd', 'pgoff'),
        'getrlimit': ('resource',),
        'sendfile': ('out_fd', 'in_fd', 'offset', 'count'),
        'getcwd': ('size',),
        'chown16': ('filename', 'user', 'group'),
        'rt_sigsuspend': ('unewset', 'sigsetsize'),
        'rt_sigqueueinfo': ('pid', 'sig', 'uinfo'),
        'rt_sigtimedwait': ('uts', 'sigsetsize'),
        'rt_sigpending': ('sigsetsize',),
        'rt_sigprocmask': ('how', 'nset', 'sigsetsize'),
        'rt_sigaction': ('sig', 'act', 'sigsetsize'),
        'prctl': ('option', 'arg2', 'arg3', 'arg4', 'arg5'),
        'getresgid16': ('rgid', 'egid', 'sgid'),
        'setresgid16': ('rgid', 'egid', 'sgid'),
        'poll': ('ufds', 'nfds', 'timeout_msecs'),
        'getresuid16': ('ruid', 'euid', 'suid'),
        'setresuid16': ('ruid', 'euid', 'suid'),
        'mremap': ('addr', 'old_len', 'new_len', 'flags', 'new_addr'),
        'nanosleep': ('rqtp',),
        'sched_rr_get_interval': ('pid',),
        'sched_get_priority_min': ('policy',),
        'sched_get_priority_max': ('policy',),
        'sched_yield': (),
        'sched_getscheduler': ('pid',),
        'sched_setscheduler': ('pid', 'policy', 'param'),
        'sched_getparam': ('pid',),
        'sched_setparam': ('pid', 'param'),
        'munlockall': (),
        'mlockall': ('flags',),
        'munlock': ('start', 'len'),
        'mlock': ('start', 'len'),
        'sysctl': ('args',),
        'fdatasync': ('fd',),
        'getsid': ('pid',),
        'writev': ('fd', 'vec', 'vlen'),
        'readv': ('fd', 'vec', 'vlen'),
        'msync': ('start', 'len', 'flags'),
        'flock': ('fd', 'cmd'),
        'select': ('n', 'inp', 'outp', 'exp', 'tvp'),
        'getdents': ('fd', 'count'),
        'llseek': ('fd', 'offset_high', 'offset_low', 'result', 'origin'),
        'setfsgid16': ('gid',),
        'setfsuid16': ('uid',),
        'personality': ('personality',),
        'sysfs': ('option', 'arg1', 'arg2'),
        'bdflush': ('func', 'data'),
        'fchdir': ('fd',),
        'getpgid': ('pid',),
        'quotactl': ('cmd', 'special', 'id', 'addr'),
        'delete_module': ('name_user', 'flags'),
        'init_module': ('umod', 'len', 'uargs'),
        'sigprocmask': ('how', 'nset', 'oset'),
        'mprotect': ('start', 'len', 'prot'),
        'adjtimex': ('txc_p',),
        'newuname': (),
        'setdomainname': ('name', 'len'),
        'clone': ('clone_flags', 'newsp', 'parent_tid', 'child_tid'),
        'fsync': ('fd',),
        'ipc': ('call', 'first', 'second', 'third', 'ptr', 'fifth'),
        'sysinfo': (),
        'swapoff': ('specialfile',),
        'wait4': ('upid', 'options'),
        'vhangup': (),
        'uname': ('name',),
        'newfstat': ('fd',),
        'newlstat': ('filename',),
        'newstat': ('filename',),
        'getitimer': ('which',),
        'setitimer': ('which', 'value'),
        'syslog': ('type', 'len'),
        'socketcall': ('call', 'args'),
        'fstatfs': ('fd',),
        'statfs': ('pathname',),
        'setpriority': ('which', 'who', 'niceval'),
        'getpriority': ('which', 'who'),
        'fchown16': ('fd', 'user', 'group'),
        'fchmod': ('fd', 'mode'),
        'ftruncate': ('fd', 'length'),
        'truncate': ('path', 'length'),
        'munmap': ('addr', 'len'),
        'old_mmap': ('arg',),
        'old_readdir': ('fd', 'dirent', 'count'),
        'reboot': ('magic1', 'magic2', 'cmd', 'arg'),
        'swapon': ('specialfile', 'swap_flags'),
        'uselib': ('library',),
        'readlink': ('path', 'bufsiz'),
        'lstat': ('filename', 'statbuf'),
        'symlink': ('oldname', 'newname'),
        'old_select': ('arg',),
        'setgroups16': ('gidsetsize', 'grouplist'),
        'getgroups16': ('gidsetsize', 'grouplist'),
        'settimeofday': ('tv', 'tz'),
        'gettimeofday': (),
        'getrusage': ('who',),
        'old_getrlimit': ('resource', 'rlim'),
        'setrlimit': ('resource', 'rlim'),
        'sethostname': ('name', 'len'),
        'sigpending': ('set',),
        'setregid16': ('rgid', 'egid'),
        'setreuid16': ('ruid', 'euid'),
        'ssetmask': ('newmask',),
        'sgetmask': (),
        'setsid': (),
        'getpgrp': (),
        'getppid': (),
        'dup2': ('oldfd', 'newfd'),
        'ustat': ('dev',),
        'chroot': ('filename',),
        'umask': ('mask',),
        'olduname': ('name',),
        'setpgid': ('pid', 'pgid'),
        'fcntl': ('fd', 'cmd', 'arg'),
        'ioctl': ('fd', 'cmd', 'arg'),
        'umount': ('name', 'flags'),
        'acct': ('name',),
        'getegid16': (),
        'geteuid16': (),
        'signal': ('sig', 'handler'),
        'getgid16': (),
        'setgid16': ('gid',),
        'brk': ('brk',),
        'times': (),
        'pipe': (),
        'dup': ('fildes',),
        'rmdir': ('pathname',),
        'mkdir': ('pathname', 'mode'),
        'rename': ('oldname', 'newname'),
        'kill': ('pid', 'sig'),
        'sync': (),
        'nice': ('increment',),
        'access': ('filename', 'mode'),
        'utime': ('filename', 'times'),
        'pause': (),
    }
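    # What follows is the expansion step: a factory builds one printing
    # handler per table entry and the loop below attaches it under the
    # original method name, so handler lookup by event name elsewhere in
    # this file keeps working unchanged.  Each generated handler is
    # intended to print exactly what its hand-written predecessor printed,
    # including the field order and the "{ cpu_id = ... }" prefix.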
    def _make_compat_syscall_entry_handler(fields):
        # Factory so each generated handler captures its own field tuple;
        # a plain closure inside the loop below would late-bind 'fields'.
        def handler(self, event):
            timestamp = event.timestamp
            cpu_id = event["cpu_id"]
            if fields:
                args = ", ".join("%s = %%s" % field for field in fields)
                fmt = "[%%s] %%s: { cpu_id = %%s }, { %s }" % args
                values = tuple(event[field] for field in fields)
            else:
                # Argument-less syscalls keep the original ", }" tail.
                fmt = "[%s] %s: { cpu_id = %s }, }"
                values = ()
            self.print_filter(event, fmt % ((self.ns_to_hour_nsec(timestamp),
                                             event.name, cpu_id) + values))
        return handler

    # Attach one handle_compat_syscall_entry_<name> method per table
    # entry.  Assigning through vars() works because a class body executes
    # in its real namespace mapping (true in CPython, which these analyses
    # target).
    for _name, _fields in _compat_syscall_entry_fields.items():
        vars()['handle_compat_syscall_entry_' + _name] = \
            _make_compat_syscall_entry_handler(_fields)
    del _name, _fields, _make_compat_syscall_entry_handler
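    # NOTE (assumption): the surrounding class presumably dispatches by
    # event name, along the lines of
    #
    #   handler = getattr(self, 'handle_%s' % event.name, None)
    #   if handler is not None:
    #       handler(event)
    #
    # which is why the generated methods keep the exact
    # handle_compat_syscall_entry_<name> naming; the real dispatch code
    # lives elsewhere in this file.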
event["filename"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename,)) def handle_compat_syscall_entry_umask(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] mask = event["mask"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { mask = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, mask,)) def handle_compat_syscall_entry_olduname(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_compat_syscall_entry_setpgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] pgid = event["pgid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, pgid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, pgid,)) def handle_compat_syscall_entry_fcntl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] cmd = event["cmd"] arg = event["arg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, cmd = %s, arg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, cmd, arg,)) def handle_compat_syscall_entry_ioctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] cmd = event["cmd"] arg = event["arg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, cmd = %s, arg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, cmd, arg,)) def handle_compat_syscall_entry_umount(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, flags,)) def handle_compat_syscall_entry_acct(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_compat_syscall_entry_getegid16(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_compat_syscall_entry_geteuid16(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_compat_syscall_entry_signal(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] sig = event["sig"] handler = event["handler"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { sig = %s, handler = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, sig, handler,)) def handle_compat_syscall_entry_getgid16(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_compat_syscall_entry_setgid16(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] gid = event["gid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { gid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, gid,)) def handle_compat_syscall_entry_brk(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] brk = event["brk"] self.print_filter(event, "[%s] %s: { cpu_id = %s 
}, { brk = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, brk,)) def handle_compat_syscall_entry_times(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_compat_syscall_entry_pipe(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_compat_syscall_entry_dup(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fildes = event["fildes"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fildes = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fildes,)) def handle_compat_syscall_entry_rmdir(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pathname = event["pathname"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname,)) def handle_compat_syscall_entry_mkdir(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pathname = event["pathname"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, mode,)) def handle_compat_syscall_entry_rename(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] oldname = event["oldname"] newname = event["newname"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { oldname = %s, newname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, oldname, newname,)) def handle_compat_syscall_entry_kill(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] sig = event["sig"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, sig = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, sig,)) def handle_compat_syscall_entry_sync(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_compat_syscall_entry_nice(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] increment = event["increment"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { increment = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, increment,)) def handle_compat_syscall_entry_access(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] filename = event["filename"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, mode,)) def handle_compat_syscall_entry_utime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] filename = event["filename"] times = event["times"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, times = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, times,)) def handle_compat_syscall_entry_pause(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_compat_syscall_entry_fstat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] statbuf = event["statbuf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, statbuf = %s }" % 
    def handle_compat_syscall_entry_alarm(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        seconds = event["seconds"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { seconds = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, seconds,))

    def handle_compat_syscall_entry_ptrace(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        request = event["request"]
        pid = event["pid"]
        addr = event["addr"]
        data = event["data"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { request = %s, pid = %s, addr = %s, data = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, request, pid, addr, data,))

    def handle_compat_syscall_entry_stime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        tptr = event["tptr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { tptr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, tptr,))

    def handle_compat_syscall_entry_getuid16(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_compat_syscall_entry_setuid16(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        uid = event["uid"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { uid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, uid,))

    def handle_compat_syscall_entry_oldumount(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,))

    def handle_compat_syscall_entry_mount(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        dev_name = event["dev_name"]
        dir_name = event["dir_name"]
        type = event["type"]
        flags = event["flags"]
        data = event["data"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev_name = %s, dir_name = %s, type = %s, flags = %s, data = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev_name, dir_name, type, flags, data,))

    def handle_compat_syscall_entry_getpid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_compat_syscall_entry_lseek(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        offset = event["offset"]
        origin = event["origin"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, offset = %s, origin = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, offset, origin,))

    def handle_compat_syscall_entry_stat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        statbuf = event["statbuf"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, statbuf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, statbuf,))

    def handle_compat_syscall_entry_lchown16(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        user = event["user"]
        group = event["group"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, user = %s, group = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, user, group,))

    def handle_compat_syscall_entry_chmod(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        mode = event["mode"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, mode,))

    def handle_compat_syscall_entry_mknod(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        mode = event["mode"]
        dev = event["dev"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, mode = %s, dev = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, mode, dev,))

    def handle_compat_syscall_entry_time(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_compat_syscall_entry_chdir(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename,))

    def handle_compat_syscall_entry_execve(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        argv = event["argv"]
        envp = event["envp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, argv = %s, envp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, argv, envp,))

    def handle_compat_syscall_entry_unlink(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname,))

    def handle_compat_syscall_entry_link(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        oldname = event["oldname"]
        newname = event["newname"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { oldname = %s, newname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, oldname, newname,))

    def handle_compat_syscall_entry_creat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        mode = event["mode"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, mode,))

    def handle_compat_syscall_entry_waitpid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pid = event["pid"]
        stat_addr = event["stat_addr"]
        options = event["options"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, stat_addr = %s, options = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, stat_addr, options,))

    def handle_compat_syscall_entry_close(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd,))

    def handle_compat_syscall_entry_open(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        flags = event["flags"]
        mode = event["mode"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, flags = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, flags, mode,))

    def handle_compat_syscall_entry_write(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        buf = event["buf"]
        count = event["count"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, buf = %s, count = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, buf, count,))

    def handle_compat_syscall_entry_read(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        count = event["count"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, count = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, count,))
    def handle_compat_syscall_entry_exit(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        error_code = event["error_code"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { error_code = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, error_code,))

    def handle_compat_syscall_entry_restart_syscall(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_syscall_exit_finit_module(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_process_vm_writev(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_process_vm_readv(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        lvec = event["lvec"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, lvec = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, lvec,))

    def handle_syscall_exit_getcpu(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        cpup = event["cpup"]
        nodep = event["nodep"]
        tcache = event["tcache"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, cpup = %s, nodep = %s, tcache = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, cpup, nodep, tcache,))

    def handle_syscall_exit_setns(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_sendmmsg(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_syncfs(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_clock_adjtime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        utx = event["utx"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, utx = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, utx,))

    def handle_syscall_exit_open_by_handle_at(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_name_to_handle_at(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        handle = event["handle"]
        mnt_id = event["mnt_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, handle = %s, mnt_id = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, handle, mnt_id,))

    def handle_syscall_exit_prlimit64(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        old_rlim = event["old_rlim"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, old_rlim = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, old_rlim,))

    def handle_syscall_exit_fanotify_mark(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_fanotify_init(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_recvmmsg(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        mmsg = event["mmsg"]
        timeout = event["timeout"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, mmsg = %s, timeout = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, mmsg, timeout,))

    def handle_syscall_exit_perf_event_open(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_rt_tgsigqueueinfo(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_pwritev(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_preadv(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        vec = event["vec"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, vec = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, vec,))

    def handle_syscall_exit_inotify_init1(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_pipe2(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        fildes = event["fildes"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, fildes = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, fildes,))

    def handle_syscall_exit_dup3(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_epoll_create1(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_eventfd2(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_signalfd4(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))
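    # [Editor's note] None of these methods is called by name in this file;
    # the surrounding script presumably resolves the handler from the event's
    # name, following the handle_<event name> convention visible above.  A
    # minimal sketch of such a dispatcher, assuming a hypothetical method name
    # (_dispatch is not part of the generated file):
    #
    # def _dispatch(self, event):
    #     # Look up a handler matching the event name; ignore events for
    #     # which no handler was generated.
    #     handler = getattr(self, 'handle_%s' % event.name, None)
    #     if handler is not None:
    #         handler(event)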
    def handle_syscall_exit_accept4(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        upeer_sockaddr = event["upeer_sockaddr"]
        upeer_addrlen = event["upeer_addrlen"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, upeer_sockaddr = %s, upeer_addrlen = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, upeer_sockaddr, upeer_addrlen,))

    def handle_syscall_exit_timerfd_gettime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        otmr = event["otmr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, otmr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, otmr,))

    def handle_syscall_exit_timerfd_settime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        otmr = event["otmr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, otmr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, otmr,))

    def handle_syscall_exit_fallocate(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_eventfd(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_timerfd_create(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_signalfd(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_epoll_pwait(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        events = event["events"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, events = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, events,))

    def handle_syscall_exit_utimensat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_move_pages(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        status = event["status"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, status = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, status,))

    def handle_syscall_exit_vmsplice(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_sync_file_range(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_tee(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_splice(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_get_robust_list(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        head_ptr = event["head_ptr"]
        len_ptr = event["len_ptr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, head_ptr = %s, len_ptr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, head_ptr, len_ptr,))

    def handle_syscall_exit_set_robust_list(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_unshare(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_ppoll(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        ufds = event["ufds"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, ufds = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, ufds,))

    def handle_syscall_exit_pselect6(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        inp = event["inp"]
        outp = event["outp"]
        exp = event["exp"]
        tsp = event["tsp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, inp = %s, outp = %s, exp = %s, tsp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, inp, outp, exp, tsp,))

    def handle_syscall_exit_faccessat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_fchmodat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_readlinkat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        buf = event["buf"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,))

    def handle_syscall_exit_symlinkat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_linkat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_renameat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))
    def handle_syscall_exit_unlinkat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_newfstatat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        statbuf = event["statbuf"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, statbuf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, statbuf,))

    def handle_syscall_exit_futimesat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_fchownat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_mknodat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_mkdirat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_openat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_migrate_pages(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_inotify_rm_watch(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_inotify_add_watch(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_inotify_init(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_ioprio_get(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_ioprio_set(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_keyctl(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        arg2 = event["arg2"]
        arg3 = event["arg3"]
        arg4 = event["arg4"]
        arg5 = event["arg5"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, arg2 = %s, arg3 = %s, arg4 = %s, arg5 = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, arg2, arg3, arg4, arg5,))

    def handle_syscall_exit_request_key(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_add_key(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_waitid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        infop = event["infop"]
        ru = event["ru"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, infop = %s, ru = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, infop, ru,))

    def handle_syscall_exit_kexec_load(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_mq_getsetattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        u_omqstat = event["u_omqstat"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, u_omqstat = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, u_omqstat,))

    def handle_syscall_exit_mq_notify(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_mq_timedreceive(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        u_msg_ptr = event["u_msg_ptr"]
        u_msg_prio = event["u_msg_prio"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, u_msg_ptr = %s, u_msg_prio = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, u_msg_ptr, u_msg_prio,))

    def handle_syscall_exit_mq_timedsend(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_mq_unlink(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_mq_open(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_get_mempolicy(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        policy = event["policy"]
        nmask = event["nmask"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, policy = %s, nmask = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, policy, nmask,))

    def handle_syscall_exit_set_mempolicy(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_mbind(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))
    def handle_syscall_exit_utimes(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_tgkill(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_epoll_ctl(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_epoll_wait(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        events = event["events"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, events = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, events,))

    def handle_syscall_exit_exit_group(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_clock_nanosleep(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        rmtp = event["rmtp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, rmtp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, rmtp,))

    def handle_syscall_exit_clock_getres(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        tp = event["tp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, tp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, tp,))

    def handle_syscall_exit_clock_gettime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        tp = event["tp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, tp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, tp,))

    def handle_syscall_exit_clock_settime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_timer_delete(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_timer_getoverrun(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_timer_gettime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        setting = event["setting"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, setting = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, setting,))

    def handle_syscall_exit_timer_settime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        old_setting = event["old_setting"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, old_setting = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, old_setting,))

    def handle_syscall_exit_timer_create(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        created_timer_id = event["created_timer_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, created_timer_id = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, created_timer_id,))

    def handle_syscall_exit_fadvise64(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_semtimedop(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        timeout = event["timeout"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, timeout = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, timeout,))

    def handle_syscall_exit_restart_syscall(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_set_tid_address(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_getdents64(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        dirent = event["dirent"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, dirent = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, dirent,))

    def handle_syscall_exit_remap_file_pages(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_epoll_create(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_lookup_dcookie(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        buf = event["buf"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,))

    def handle_syscall_exit_io_cancel(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        result = event["result"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, result = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, result,))

    def handle_syscall_exit_io_submit(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_io_getevents(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        events = event["events"]
        timeout = event["timeout"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, events = %s, timeout = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, events, timeout,))
    def handle_syscall_exit_io_destroy(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_io_setup(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_sched_getaffinity(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        user_mask_ptr = event["user_mask_ptr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, user_mask_ptr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, user_mask_ptr,))

    def handle_syscall_exit_sched_setaffinity(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_futex(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        uaddr = event["uaddr"]
        uaddr2 = event["uaddr2"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, uaddr = %s, uaddr2 = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, uaddr, uaddr2,))

    def handle_syscall_exit_time(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        tloc = event["tloc"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, tloc = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, tloc,))

    def handle_syscall_exit_tkill(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_fremovexattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_lremovexattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_removexattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_flistxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        list = event["list"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, list = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, list,))

    def handle_syscall_exit_llistxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        list = event["list"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, list = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, list,))

    def handle_syscall_exit_listxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        list = event["list"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, list = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, list,))

    def handle_syscall_exit_fgetxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        value = event["value"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, value = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, value,))

    def handle_syscall_exit_lgetxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        value = event["value"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, value = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, value,))

    def handle_syscall_exit_getxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        value = event["value"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, value = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, value,))

    def handle_syscall_exit_fsetxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_lsetxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_setxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_readahead(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_gettid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_quotactl(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        addr = event["addr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, addr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, addr,))

    def handle_syscall_exit_delete_module(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_init_module(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_setdomainname(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_sethostname(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_reboot(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))
= event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_swapoff(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_swapon(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_umount(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_mount(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_settimeofday(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_acct(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_sync(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_chroot(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_setrlimit(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_adjtimex(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] txc_p = event["txc_p"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, txc_p = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, txc_p,)) def handle_syscall_exit_prctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] arg2 = event["arg2"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, arg2 = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, arg2,)) def handle_syscall_exit_sysctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] args = event["args"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, args = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, args,)) def handle_syscall_exit_pivot_root(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_vhangup(self, event): timestamp = 
event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_munlockall(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_mlockall(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_munlock(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_mlock(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_sched_rr_get_interval(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] interval = event["interval"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, interval = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, interval,)) def handle_syscall_exit_sched_get_priority_min(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_sched_get_priority_max(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_sched_getscheduler(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_sched_setscheduler(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_sched_getparam(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] param = event["param"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, param = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, param,)) def handle_syscall_exit_sched_setparam(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_setpriority(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_getpriority(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % 
    def handle_syscall_exit_sysfs(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_fstatfs(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        buf = event["buf"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,))

    def handle_syscall_exit_statfs(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        buf = event["buf"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,))

    def handle_syscall_exit_ustat(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        ubuf = event["ubuf"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, ubuf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, ubuf,))

    def handle_syscall_exit_personality(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_mknod(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_utime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_sigaltstack(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        uoss = event["uoss"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, uoss = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, uoss,))

    def handle_syscall_exit_rt_sigsuspend(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_rt_sigqueueinfo(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_rt_sigtimedwait(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        uthese = event["uthese"]
        uinfo = event["uinfo"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, uthese = %s, uinfo = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, uthese, uinfo,))

    def handle_syscall_exit_rt_sigpending(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        uset = event["uset"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, uset = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, uset,))

    def handle_syscall_exit_getsid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_setfsgid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_setfsuid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_getpgid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_getresgid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        rgidp = event["rgidp"]
        egidp = event["egidp"]
        sgidp = event["sgidp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, rgidp = %s, egidp = %s, sgidp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, rgidp, egidp, sgidp,))

    def handle_syscall_exit_setresgid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_getresuid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        ruidp = event["ruidp"]
        euidp = event["euidp"]
        suidp = event["suidp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, ruidp = %s, euidp = %s, suidp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, ruidp, euidp, suidp,))

    def handle_syscall_exit_setresuid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_setgroups(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_getgroups(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        grouplist = event["grouplist"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, grouplist = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, grouplist,))

    def handle_syscall_exit_setregid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_setreuid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_setsid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def handle_syscall_exit_getpgrp(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ret = event["ret"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,))

    def
handle_syscall_exit_getppid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_setpgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_getegid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_geteuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_setgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_setuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_getgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_syslog(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,)) def handle_syscall_exit_getuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_ptrace(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] addr = event["addr"] data = event["data"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, addr = %s, data = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, addr, data,)) def handle_syscall_exit_times(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] tbuf = event["tbuf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, tbuf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, tbuf,)) def handle_syscall_exit_sysinfo(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] info = event["info"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, info = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, info,)) def handle_syscall_exit_getrusage(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] ru = event["ru"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, ru = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, ru,)) def handle_syscall_exit_getrlimit(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] rlim = 
event["rlim"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, rlim = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, rlim,)) def handle_syscall_exit_gettimeofday(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] tv = event["tv"] tz = event["tz"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, tv = %s, tz = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, tv, tz,)) def handle_syscall_exit_umask(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_lchown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_fchown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_chown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_fchmod(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_chmod(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_readlink(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,)) def handle_syscall_exit_symlink(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_unlink(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_link(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_creat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_rmdir(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_mkdir(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret 
= event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_rename(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_fchdir(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_chdir(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_getcwd(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,)) def handle_syscall_exit_getdents(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] dirent = event["dirent"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, dirent = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, dirent,)) def handle_syscall_exit_ftruncate(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_truncate(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_fdatasync(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_fsync(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_flock(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_fcntl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] arg = event["arg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, arg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, arg,)) def handle_syscall_exit_msgctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,)) def handle_syscall_exit_msgrcv(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] msgp = event["msgp"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, msgp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, msgp,)) def 
handle_syscall_exit_msgsnd(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_msgget(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_shmdt(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_semctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] arg = event["arg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, arg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, arg,)) def handle_syscall_exit_semop(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_semget(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_newuname(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, name,)) def handle_syscall_exit_kill(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_wait4(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] stat_addr = event["stat_addr"] ru = event["ru"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, stat_addr = %s, ru = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, stat_addr, ru,)) def handle_syscall_exit_exit(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_execve(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_clone(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_getsockopt(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] optval = event["optval"] optlen = event["optlen"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, optval = %s, optlen = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, optval, optlen,)) def handle_syscall_exit_setsockopt(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = 
event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_socketpair(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] usockvec = event["usockvec"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, usockvec = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, usockvec,)) def handle_syscall_exit_getpeername(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] usockaddr = event["usockaddr"] usockaddr_len = event["usockaddr_len"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, usockaddr = %s, usockaddr_len = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, usockaddr, usockaddr_len,)) def handle_syscall_exit_getsockname(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] usockaddr = event["usockaddr"] usockaddr_len = event["usockaddr_len"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, usockaddr = %s, usockaddr_len = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, usockaddr, usockaddr_len,)) def handle_syscall_exit_listen(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_bind(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_shutdown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_recvmsg(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] msg = event["msg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, msg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, msg,)) def handle_syscall_exit_sendmsg(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_recvfrom(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] ubuf = event["ubuf"] addr = event["addr"] addr_len = event["addr_len"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, ubuf = %s, addr = %s, addr_len = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, ubuf, addr, addr_len,)) def handle_syscall_exit_sendto(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_accept(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] upeer_addrlen = event["upeer_addrlen"] family = event["family"] sport = event["sport"] _v4addr_length = event["_v4addr_length"] v4addr = event["v4addr"] _v6addr_length = event["_v6addr_length"] v6addr = event["v6addr"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, upeer_addrlen = %s, family = %s, sport = %s, 
_v4addr_length = %s, v4addr = %s, _v6addr_length = %s, v6addr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, upeer_addrlen, family, sport, _v4addr_length, v4addr, _v6addr_length, v6addr,)) def handle_syscall_exit_connect(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_socket(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_sendfile64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] offset = event["offset"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, offset = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, offset,)) def handle_syscall_exit_getpid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_setitimer(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] ovalue = event["ovalue"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, ovalue = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, ovalue,)) def handle_syscall_exit_alarm(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_getitimer(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] value = event["value"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, value = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, value,)) def handle_syscall_exit_nanosleep(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] rmtp = event["rmtp"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, rmtp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, rmtp,)) def handle_syscall_exit_pause(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_dup2(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_dup(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_shmctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,)) def handle_syscall_exit_shmat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s 
}" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_shmget(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_madvise(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_mincore(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] vec = event["vec"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, vec = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, vec,)) def handle_syscall_exit_msync(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_mremap(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_sched_yield(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_select(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] inp = event["inp"] outp = event["outp"] exp = event["exp"] tvp = event["tvp"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, inp = %s, outp = %s, exp = %s, tvp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, inp, outp, exp, tvp,)) def handle_syscall_exit_pipe(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] fildes = event["fildes"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, fildes = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, fildes,)) def handle_syscall_exit_access(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_writev(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] vec = event["vec"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, vec = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, vec,)) def handle_syscall_exit_readv(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] vec = event["vec"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, vec = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, vec,)) def handle_syscall_exit_pwrite64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_pread64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % 
(self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,)) def handle_syscall_exit_ioctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] arg = event["arg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, arg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, arg,)) def handle_syscall_exit_rt_sigprocmask(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] oset = event["oset"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, oset = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, oset,)) def handle_syscall_exit_rt_sigaction(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] oact = event["oact"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, oact = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, oact,)) def handle_syscall_exit_brk(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_munmap(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_mprotect(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_mmap(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_lseek(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_poll(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] ufds = event["ufds"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, ufds = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, ufds,)) def handle_syscall_exit_newlstat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] statbuf = event["statbuf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, statbuf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, statbuf,)) def handle_syscall_exit_newfstat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] statbuf = event["statbuf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, statbuf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, statbuf,)) def handle_syscall_exit_newstat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] statbuf = event["statbuf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, statbuf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, statbuf,)) def handle_syscall_exit_close(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), 
event.name, cpu_id, ret,)) def handle_syscall_exit_open(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_write(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret,)) def handle_syscall_exit_read(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ret = event["ret"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ret, buf,)) def handle_syscall_entry_finit_module(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] uargs = event["uargs"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, uargs = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, uargs, flags,)) def handle_syscall_entry_process_vm_writev(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] lvec = event["lvec"] liovcnt = event["liovcnt"] rvec = event["rvec"] riovcnt = event["riovcnt"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, lvec = %s, liovcnt = %s, rvec = %s, riovcnt = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, lvec, liovcnt, rvec, riovcnt, flags,)) def handle_syscall_entry_process_vm_readv(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] liovcnt = event["liovcnt"] rvec = event["rvec"] riovcnt = event["riovcnt"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, liovcnt = %s, rvec = %s, riovcnt = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, liovcnt, rvec, riovcnt, flags,)) def handle_syscall_entry_getcpu(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] tcache = event["tcache"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { tcache = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, tcache,)) def handle_syscall_entry_setns(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] nstype = event["nstype"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, nstype = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, nstype,)) def handle_syscall_entry_sendmmsg(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] mmsg = event["mmsg"] vlen = event["vlen"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, mmsg = %s, vlen = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, mmsg, vlen, flags,)) def handle_syscall_entry_syncfs(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd,)) def handle_syscall_entry_clock_adjtime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] which_clock = event["which_clock"] utx = event["utx"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which_clock = %s, utx = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which_clock, utx,)) def 
handle_syscall_entry_open_by_handle_at(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] mountdirfd = event["mountdirfd"] handle = event["handle"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { mountdirfd = %s, handle = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, mountdirfd, handle, flags,)) def handle_syscall_entry_name_to_handle_at(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] name = event["name"] handle = event["handle"] flag = event["flag"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, name = %s, handle = %s, flag = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, name, handle, flag,)) def handle_syscall_entry_prlimit64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] resource = event["resource"] new_rlim = event["new_rlim"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, resource = %s, new_rlim = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, resource, new_rlim,)) def handle_syscall_entry_fanotify_mark(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fanotify_fd = event["fanotify_fd"] flags = event["flags"] mask = event["mask"] dfd = event["dfd"] pathname = event["pathname"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fanotify_fd = %s, flags = %s, mask = %s, dfd = %s, pathname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fanotify_fd, flags, mask, dfd, pathname,)) def handle_syscall_entry_fanotify_init(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] flags = event["flags"] event_f_flags = event["event_f_flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { flags = %s, event_f_flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, flags, event_f_flags,)) def handle_syscall_entry_recvmmsg(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] vlen = event["vlen"] flags = event["flags"] timeout = event["timeout"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, vlen = %s, flags = %s, timeout = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, vlen, flags, timeout,)) def handle_syscall_entry_perf_event_open(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] attr_uptr = event["attr_uptr"] pid = event["pid"] cpu = event["cpu"] group_fd = event["group_fd"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { attr_uptr = %s, pid = %s, cpu = %s, group_fd = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, attr_uptr, pid, cpu, group_fd, flags,)) def handle_syscall_entry_rt_tgsigqueueinfo(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] tgid = event["tgid"] pid = event["pid"] sig = event["sig"] uinfo = event["uinfo"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { tgid = %s, pid = %s, sig = %s, uinfo = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, tgid, pid, sig, uinfo,)) def handle_syscall_entry_pwritev(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] vec = event["vec"] vlen = event["vlen"] pos_l = event["pos_l"] pos_h = event["pos_h"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, vec = %s, vlen = %s, pos_l = %s, pos_h = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, vec, vlen, pos_l, pos_h,)) def handle_syscall_entry_preadv(self, event): timestamp = 
event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] vlen = event["vlen"] pos_l = event["pos_l"] pos_h = event["pos_h"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, vlen = %s, pos_l = %s, pos_h = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, vlen, pos_l, pos_h,)) def handle_syscall_entry_inotify_init1(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, flags,)) def handle_syscall_entry_pipe2(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, flags,)) def handle_syscall_entry_dup3(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] oldfd = event["oldfd"] newfd = event["newfd"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { oldfd = %s, newfd = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, oldfd, newfd, flags,)) def handle_syscall_entry_epoll_create1(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, flags,)) def handle_syscall_entry_eventfd2(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] count = event["count"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { count = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, count, flags,)) def handle_syscall_entry_signalfd4(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ufd = event["ufd"] user_mask = event["user_mask"] sizemask = event["sizemask"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ufd = %s, user_mask = %s, sizemask = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ufd, user_mask, sizemask, flags,)) def handle_syscall_entry_accept4(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] upeer_addrlen = event["upeer_addrlen"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, upeer_addrlen = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, upeer_addrlen, flags,)) def handle_syscall_entry_timerfd_gettime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ufd = event["ufd"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ufd = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ufd,)) def handle_syscall_entry_timerfd_settime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ufd = event["ufd"] flags = event["flags"] utmr = event["utmr"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ufd = %s, flags = %s, utmr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ufd, flags, utmr,)) def handle_syscall_entry_fallocate(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] mode = event["mode"] offset = event["offset"] len = event["len"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, mode = %s, offset = %s, len = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, mode, offset, len,)) def handle_syscall_entry_eventfd(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] count = 
event["count"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { count = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, count,)) def handle_syscall_entry_timerfd_create(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] clockid = event["clockid"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { clockid = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, clockid, flags,)) def handle_syscall_entry_signalfd(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ufd = event["ufd"] user_mask = event["user_mask"] sizemask = event["sizemask"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ufd = %s, user_mask = %s, sizemask = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ufd, user_mask, sizemask,)) def handle_syscall_entry_epoll_pwait(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] epfd = event["epfd"] maxevents = event["maxevents"] timeout = event["timeout"] sigmask = event["sigmask"] sigsetsize = event["sigsetsize"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { epfd = %s, maxevents = %s, timeout = %s, sigmask = %s, sigsetsize = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, epfd, maxevents, timeout, sigmask, sigsetsize,)) def handle_syscall_entry_utimensat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] filename = event["filename"] utimes = event["utimes"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, filename = %s, utimes = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, filename, utimes, flags,)) def handle_syscall_entry_move_pages(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] nr_pages = event["nr_pages"] pages = event["pages"] nodes = event["nodes"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, nr_pages = %s, pages = %s, nodes = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, nr_pages, pages, nodes, flags,)) def handle_syscall_entry_vmsplice(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] iov = event["iov"] nr_segs = event["nr_segs"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, iov = %s, nr_segs = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, iov, nr_segs, flags,)) def handle_syscall_entry_sync_file_range(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] offset = event["offset"] nbytes = event["nbytes"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, offset = %s, nbytes = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, offset, nbytes, flags,)) def handle_syscall_entry_tee(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fdin = event["fdin"] fdout = event["fdout"] len = event["len"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fdin = %s, fdout = %s, len = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fdin, fdout, len, flags,)) def handle_syscall_entry_splice(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd_in = event["fd_in"] off_in = event["off_in"] fd_out = event["fd_out"] off_out = event["off_out"] len = event["len"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd_in = %s, 
off_in = %s, fd_out = %s, off_out = %s, len = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd_in, off_in, fd_out, off_out, len, flags,)) def handle_syscall_entry_get_robust_list(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid,)) def handle_syscall_entry_set_robust_list(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] head = event["head"] len = event["len"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { head = %s, len = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, head, len,)) def handle_syscall_entry_unshare(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] unshare_flags = event["unshare_flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { unshare_flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, unshare_flags,)) def handle_syscall_entry_ppoll(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ufds = event["ufds"] nfds = event["nfds"] tsp = event["tsp"] sigmask = event["sigmask"] sigsetsize = event["sigsetsize"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ufds = %s, nfds = %s, tsp = %s, sigmask = %s, sigsetsize = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ufds, nfds, tsp, sigmask, sigsetsize,)) def handle_syscall_entry_pselect6(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] n = event["n"] inp = event["inp"] outp = event["outp"] exp = event["exp"] tsp = event["tsp"] sig = event["sig"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { n = %s, inp = %s, outp = %s, exp = %s, tsp = %s, sig = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, n, inp, outp, exp, tsp, sig,)) def handle_syscall_entry_faccessat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] filename = event["filename"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, filename = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, filename, mode,)) def handle_syscall_entry_fchmodat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] filename = event["filename"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, filename = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, filename, mode,)) def handle_syscall_entry_readlinkat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] pathname = event["pathname"] bufsiz = event["bufsiz"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, pathname = %s, bufsiz = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, pathname, bufsiz,)) def handle_syscall_entry_symlinkat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] oldname = event["oldname"] newdfd = event["newdfd"] newname = event["newname"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { oldname = %s, newdfd = %s, newname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, oldname, newdfd, newname,)) def handle_syscall_entry_linkat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] olddfd = event["olddfd"] oldname = event["oldname"] newdfd = event["newdfd"] newname = event["newname"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { olddfd = %s, oldname = 
%s, newdfd = %s, newname = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, olddfd, oldname, newdfd, newname, flags,)) def handle_syscall_entry_renameat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] olddfd = event["olddfd"] oldname = event["oldname"] newdfd = event["newdfd"] newname = event["newname"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { olddfd = %s, oldname = %s, newdfd = %s, newname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, olddfd, oldname, newdfd, newname,)) def handle_syscall_entry_unlinkat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] pathname = event["pathname"] flag = event["flag"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, pathname = %s, flag = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, pathname, flag,)) def handle_syscall_entry_newfstatat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] filename = event["filename"] flag = event["flag"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, filename = %s, flag = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, filename, flag,)) def handle_syscall_entry_futimesat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] filename = event["filename"] utimes = event["utimes"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, filename = %s, utimes = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, filename, utimes,)) def handle_syscall_entry_fchownat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] filename = event["filename"] user = event["user"] group = event["group"] flag = event["flag"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, filename = %s, user = %s, group = %s, flag = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, filename, user, group, flag,)) def handle_syscall_entry_mknodat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] filename = event["filename"] mode = event["mode"] dev = event["dev"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, filename = %s, mode = %s, dev = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, filename, mode, dev,)) def handle_syscall_entry_mkdirat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] pathname = event["pathname"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, pathname = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, pathname, mode,)) def handle_syscall_entry_openat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dfd = event["dfd"] filename = event["filename"] flags = event["flags"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dfd = %s, filename = %s, flags = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dfd, filename, flags, mode,)) def handle_syscall_entry_migrate_pages(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] maxnode = event["maxnode"] old_nodes = event["old_nodes"] new_nodes = event["new_nodes"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, maxnode = %s, old_nodes = %s, new_nodes = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, maxnode, old_nodes, new_nodes,)) def 
handle_syscall_entry_inotify_rm_watch(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] wd = event["wd"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, wd = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, wd,)) def handle_syscall_entry_inotify_add_watch(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] pathname = event["pathname"] mask = event["mask"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, pathname = %s, mask = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, pathname, mask,)) def handle_syscall_entry_inotify_init(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_ioprio_get(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] which = event["which"] who = event["who"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which = %s, who = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which, who,)) def handle_syscall_entry_ioprio_set(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] which = event["which"] who = event["who"] ioprio = event["ioprio"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which = %s, who = %s, ioprio = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which, who, ioprio,)) def handle_syscall_entry_keyctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] option = event["option"] arg2 = event["arg2"] arg3 = event["arg3"] arg4 = event["arg4"] arg5 = event["arg5"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { option = %s, arg2 = %s, arg3 = %s, arg4 = %s, arg5 = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, option, arg2, arg3, arg4, arg5,)) def handle_syscall_entry_request_key(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] _type = event["_type"] _description = event["_description"] _callout_info = event["_callout_info"] destringid = event["destringid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { _type = %s, _description = %s, _callout_info = %s, destringid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, _type, _description, _callout_info, destringid,)) def handle_syscall_entry_add_key(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] _type = event["_type"] _description = event["_description"] _payload = event["_payload"] plen = event["plen"] ringid = event["ringid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { _type = %s, _description = %s, _payload = %s, plen = %s, ringid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, _type, _description, _payload, plen, ringid,)) def handle_syscall_entry_waitid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] which = event["which"] upid = event["upid"] options = event["options"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which = %s, upid = %s, options = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which, upid, options,)) def handle_syscall_entry_kexec_load(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] entry = event["entry"] nr_segments = event["nr_segments"] segments = event["segments"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { entry = %s, nr_segments = %s, segments = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, entry, 
                          nr_segments, segments, flags,))

    def handle_syscall_entry_mq_getsetattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        mqdes = event["mqdes"]
        u_mqstat = event["u_mqstat"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { mqdes = %s, u_mqstat = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, mqdes, u_mqstat,))

    def handle_syscall_entry_mq_notify(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        mqdes = event["mqdes"]
        u_notification = event["u_notification"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { mqdes = %s, u_notification = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, mqdes, u_notification,))

    def handle_syscall_entry_mq_timedreceive(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        mqdes = event["mqdes"]
        msg_len = event["msg_len"]
        u_abs_timeout = event["u_abs_timeout"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { mqdes = %s, msg_len = %s, u_abs_timeout = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, mqdes, msg_len, u_abs_timeout,))

    def handle_syscall_entry_mq_timedsend(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        mqdes = event["mqdes"]
        u_msg_ptr = event["u_msg_ptr"]
        msg_len = event["msg_len"]
        msg_prio = event["msg_prio"]
        u_abs_timeout = event["u_abs_timeout"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { mqdes = %s, u_msg_ptr = %s, msg_len = %s, msg_prio = %s, u_abs_timeout = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, mqdes, u_msg_ptr, msg_len, msg_prio, u_abs_timeout,))

    def handle_syscall_entry_mq_unlink(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        u_name = event["u_name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { u_name = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, u_name,))

    def handle_syscall_entry_mq_open(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        u_name = event["u_name"]
        oflag = event["oflag"]
        mode = event["mode"]
        u_attr = event["u_attr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { u_name = %s, oflag = %s, mode = %s, u_attr = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, u_name, oflag, mode, u_attr,))

    def handle_syscall_entry_get_mempolicy(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        maxnode = event["maxnode"]
        addr = event["addr"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { maxnode = %s, addr = %s, flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, maxnode, addr, flags,))

    def handle_syscall_entry_set_mempolicy(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        mode = event["mode"]
        nmask = event["nmask"]
        maxnode = event["maxnode"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { mode = %s, nmask = %s, maxnode = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, mode, nmask, maxnode,))

    def handle_syscall_entry_mbind(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        start = event["start"]
        len = event["len"]
        mode = event["mode"]
        nmask = event["nmask"]
        maxnode = event["maxnode"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { start = %s, len = %s, mode = %s, nmask = %s, maxnode = %s, flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, start, len, mode, nmask, maxnode, flags,))

    def handle_syscall_entry_utimes(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        utimes = event["utimes"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, utimes = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, utimes,))

    def handle_syscall_entry_tgkill(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        tgid = event["tgid"]
        pid = event["pid"]
        sig = event["sig"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { tgid = %s, pid = %s, sig = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, tgid, pid, sig,))

    def handle_syscall_entry_epoll_ctl(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        epfd = event["epfd"]
        op = event["op"]
        fd = event["fd"]
        _event = event["event"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { epfd = %s, op = %s, fd = %s, event = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, epfd, op, fd, _event,))

    def handle_syscall_entry_epoll_wait(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        epfd = event["epfd"]
        maxevents = event["maxevents"]
        timeout = event["timeout"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { epfd = %s, maxevents = %s, timeout = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, epfd, maxevents, timeout,))

    def handle_syscall_entry_exit_group(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        error_code = event["error_code"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { error_code = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, error_code,))

    def handle_syscall_entry_clock_nanosleep(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        which_clock = event["which_clock"]
        flags = event["flags"]
        rqtp = event["rqtp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which_clock = %s, flags = %s, rqtp = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which_clock, flags, rqtp,))

    def handle_syscall_entry_clock_getres(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        which_clock = event["which_clock"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which_clock = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which_clock,))

    def handle_syscall_entry_clock_gettime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        which_clock = event["which_clock"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which_clock = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which_clock,))

    def handle_syscall_entry_clock_settime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        which_clock = event["which_clock"]
        tp = event["tp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which_clock = %s, tp = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which_clock, tp,))

    def handle_syscall_entry_timer_delete(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        timer_id = event["timer_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { timer_id = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, timer_id,))

    def handle_syscall_entry_timer_getoverrun(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        timer_id = event["timer_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { timer_id = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, timer_id,))

    def handle_syscall_entry_timer_gettime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        timer_id = event["timer_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { timer_id = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, timer_id,))

    def handle_syscall_entry_timer_settime(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        timer_id = event["timer_id"]
        flags = event["flags"]
        new_setting = event["new_setting"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { timer_id = %s, flags = %s, new_setting = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, timer_id, flags, new_setting,))

    def handle_syscall_entry_timer_create(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        which_clock = event["which_clock"]
        timer_event_spec = event["timer_event_spec"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which_clock = %s, timer_event_spec = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which_clock, timer_event_spec,))

    def handle_syscall_entry_fadvise64(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        offset = event["offset"]
        len = event["len"]
        advice = event["advice"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, offset = %s, len = %s, advice = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, offset, len, advice,))

    def handle_syscall_entry_semtimedop(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        semid = event["semid"]
        tsops = event["tsops"]
        nsops = event["nsops"]
        timeout = event["timeout"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { semid = %s, tsops = %s, nsops = %s, timeout = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, semid, tsops, nsops, timeout,))

    def handle_syscall_entry_restart_syscall(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_syscall_entry_set_tid_address(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        tidptr = event["tidptr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { tidptr = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, tidptr,))

    def handle_syscall_entry_getdents64(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        count = event["count"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, count = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, count,))

    def handle_syscall_entry_remap_file_pages(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        start = event["start"]
        size = event["size"]
        prot = event["prot"]
        pgoff = event["pgoff"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { start = %s, size = %s, prot = %s, pgoff = %s, flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, start, size, prot, pgoff, flags,))

    def handle_syscall_entry_epoll_create(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        size = event["size"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { size = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, size,))

    def handle_syscall_entry_lookup_dcookie(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        cookie64 = event["cookie64"]
        len = event["len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { cookie64 = %s, len = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, cookie64, len,))

    def handle_syscall_entry_io_cancel(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ctx_id = event["ctx_id"]
        iocb = event["iocb"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ctx_id = %s, iocb = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ctx_id, iocb,))

    def handle_syscall_entry_io_submit(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ctx_id = event["ctx_id"]
        nr = event["nr"]
        iocbpp = event["iocbpp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ctx_id = %s, nr = %s, iocbpp = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ctx_id, nr, iocbpp,))

    def handle_syscall_entry_io_getevents(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ctx_id = event["ctx_id"]
        min_nr = event["min_nr"]
        nr = event["nr"]
        timeout = event["timeout"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ctx_id = %s, min_nr = %s, nr = %s, timeout = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ctx_id, min_nr, nr, timeout,))

    def handle_syscall_entry_io_destroy(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        ctx = event["ctx"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ctx = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ctx,))

    def handle_syscall_entry_io_setup(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        nr_events = event["nr_events"]
        ctxp = event["ctxp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nr_events = %s, ctxp = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, nr_events, ctxp,))

    def handle_syscall_entry_sched_getaffinity(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pid = event["pid"]
        len = event["len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, len = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, len,))

    def handle_syscall_entry_sched_setaffinity(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pid = event["pid"]
        len = event["len"]
        user_mask_ptr = event["user_mask_ptr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, len = %s, user_mask_ptr = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, len, user_mask_ptr,))

    def handle_syscall_entry_futex(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        uaddr = event["uaddr"]
        op = event["op"]
        val = event["val"]
        utime = event["utime"]
        uaddr2 = event["uaddr2"]
        val3 = event["val3"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { uaddr = %s, op = %s, val = %s, utime = %s, uaddr2 = %s, val3 = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, uaddr, op, val, utime, uaddr2, val3,))

    def handle_syscall_entry_time(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_syscall_entry_tkill(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pid = event["pid"]
        sig = event["sig"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, sig = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, sig,))

    def handle_syscall_entry_fremovexattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, name = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, name,))

    def handle_syscall_entry_lremovexattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, name = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, name,))
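
    # Note: the extended-attribute handlers below, like every generated
    # entry handler in this class, follow one fixed shape: unpack the
    # event's payload fields, then emit a single formatted line through
    # self.print_filter(), which presumably applies the user's event
    # filtering before printing anything.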
    def handle_syscall_entry_removexattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, name = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, name,))

    def handle_syscall_entry_flistxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        size = event["size"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, size = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, size,))

    def handle_syscall_entry_llistxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        size = event["size"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, size = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, size,))

    def handle_syscall_entry_listxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        size = event["size"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, size = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, size,))

    def handle_syscall_entry_fgetxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        name = event["name"]
        size = event["size"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, name = %s, size = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, name, size,))

    def handle_syscall_entry_lgetxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        name = event["name"]
        size = event["size"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, name = %s, size = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, name, size,))

    def handle_syscall_entry_getxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        name = event["name"]
        size = event["size"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, name = %s, size = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, name, size,))

    def handle_syscall_entry_fsetxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        name = event["name"]
        value = event["value"]
        size = event["size"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, name = %s, value = %s, size = %s, flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, name, value, size, flags,))

    def handle_syscall_entry_lsetxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        name = event["name"]
        value = event["value"]
        size = event["size"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, name = %s, value = %s, size = %s, flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, name, value, size, flags,))

    def handle_syscall_entry_setxattr(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        name = event["name"]
        value = event["value"]
        size = event["size"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, name = %s, value = %s, size = %s, flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, name, value, size, flags,))

    def handle_syscall_entry_readahead(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        offset = event["offset"]
        count = event["count"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, offset = %s, count = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, offset, count,))

    def handle_syscall_entry_gettid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_syscall_entry_quotactl(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        cmd = event["cmd"]
        special = event["special"]
        id = event["id"]
        addr = event["addr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { cmd = %s, special = %s, id = %s, addr = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, cmd, special, id, addr,))

    def handle_syscall_entry_delete_module(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name_user = event["name_user"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name_user = %s, flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name_user, flags,))

    def handle_syscall_entry_init_module(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        umod = event["umod"]
        len = event["len"]
        uargs = event["uargs"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { umod = %s, len = %s, uargs = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, umod, len, uargs,))

    def handle_syscall_entry_setdomainname(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        len = event["len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, len = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, len,))

    def handle_syscall_entry_sethostname(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        len = event["len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, len = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, len,))

    def handle_syscall_entry_reboot(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        magic1 = event["magic1"]
        magic2 = event["magic2"]
        cmd = event["cmd"]
        arg = event["arg"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { magic1 = %s, magic2 = %s, cmd = %s, arg = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, magic1, magic2, cmd, arg,))

    def handle_syscall_entry_swapoff(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        specialfile = event["specialfile"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { specialfile = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, specialfile,))

    def handle_syscall_entry_swapon(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        specialfile = event["specialfile"]
        swap_flags = event["swap_flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { specialfile = %s, swap_flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, specialfile, swap_flags,))

    def handle_syscall_entry_umount(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, flags,))

    def handle_syscall_entry_mount(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        dev_name = event["dev_name"]
        dir_name = event["dir_name"]
        type = event["type"]
        flags = event["flags"]
        data = event["data"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev_name = %s, dir_name = %s, type = %s, flags = %s, data = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev_name, dir_name, type, flags, data,))

    def handle_syscall_entry_settimeofday(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        tv = event["tv"]
        tz = event["tz"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { tv = %s, tz = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, tv, tz,))

    def handle_syscall_entry_acct(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,))

    def handle_syscall_entry_sync(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_syscall_entry_chroot(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename,))

    def handle_syscall_entry_setrlimit(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        resource = event["resource"]
        rlim = event["rlim"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { resource = %s, rlim = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, resource, rlim,))

    def handle_syscall_entry_adjtimex(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        txc_p = event["txc_p"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { txc_p = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, txc_p,))

    def handle_syscall_entry_prctl(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        option = event["option"]
        arg2 = event["arg2"]
        arg3 = event["arg3"]
        arg4 = event["arg4"]
        arg5 = event["arg5"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { option = %s, arg2 = %s, arg3 = %s, arg4 = %s, arg5 = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, option, arg2, arg3, arg4, arg5,))

    def handle_syscall_entry_sysctl(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        args = event["args"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { args = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, args,))

    def handle_syscall_entry_pivot_root(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        new_root = event["new_root"]
        put_old = event["put_old"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { new_root = %s, put_old = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, new_root, put_old,))

    def handle_syscall_entry_vhangup(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_syscall_entry_munlockall(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_syscall_entry_mlockall(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, flags,))

    def handle_syscall_entry_munlock(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        start = event["start"]
        len = event["len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { start = %s, len = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, start, len,))

    def handle_syscall_entry_mlock(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        start = event["start"]
        len = event["len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { start = %s, len = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, start, len,))

    def handle_syscall_entry_sched_rr_get_interval(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pid = event["pid"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid,))

    def handle_syscall_entry_sched_get_priority_min(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        policy = event["policy"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { policy = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, policy,))

    def handle_syscall_entry_sched_get_priority_max(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        policy = event["policy"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { policy = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, policy,))

    def handle_syscall_entry_sched_getscheduler(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pid = event["pid"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid,))

    def handle_syscall_entry_sched_setscheduler(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pid = event["pid"]
        policy = event["policy"]
        param = event["param"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, policy = %s, param = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, policy, param,))

    def handle_syscall_entry_sched_getparam(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pid = event["pid"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid,))

    def handle_syscall_entry_sched_setparam(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pid = event["pid"]
        param = event["param"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, param = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, param,))

    def handle_syscall_entry_setpriority(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        which = event["which"]
        who = event["who"]
        niceval = event["niceval"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which = %s, who = %s, niceval = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which, who, niceval,))

    def handle_syscall_entry_getpriority(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        which = event["which"]
        who = event["who"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which = %s, who = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which, who,))

    def handle_syscall_entry_sysfs(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        option = event["option"]
        arg1 = event["arg1"]
        arg2 = event["arg2"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { option = %s, arg1 = %s, arg2 = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, option, arg1, arg2,))
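
    # Filesystem statistics, credential (UID/GID) and signal-related entry
    # points follow. The payload field names mirror the kernel syscall
    # argument names as recorded by LTTng's syscall instrumentation, which
    # is why some locals shadow Python builtins such as len, id and type;
    # the shadowing is harmless within these short generated bodies.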
= event["cpu_id"] fd = event["fd"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd,)) def handle_syscall_entry_statfs(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pathname = event["pathname"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname,)) def handle_syscall_entry_ustat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev,)) def handle_syscall_entry_personality(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] personality = event["personality"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { personality = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, personality,)) def handle_syscall_entry_mknod(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] filename = event["filename"] mode = event["mode"] dev = event["dev"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, mode = %s, dev = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, mode, dev,)) def handle_syscall_entry_utime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] filename = event["filename"] times = event["times"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, times = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, times,)) def handle_syscall_entry_sigaltstack(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] uss = event["uss"] uoss = event["uoss"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { uss = %s, uoss = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, uss, uoss,)) def handle_syscall_entry_rt_sigsuspend(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] unewset = event["unewset"] sigsetsize = event["sigsetsize"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { unewset = %s, sigsetsize = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, unewset, sigsetsize,)) def handle_syscall_entry_rt_sigqueueinfo(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] sig = event["sig"] uinfo = event["uinfo"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, sig = %s, uinfo = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, sig, uinfo,)) def handle_syscall_entry_rt_sigtimedwait(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] uts = event["uts"] sigsetsize = event["sigsetsize"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { uts = %s, sigsetsize = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, uts, sigsetsize,)) def handle_syscall_entry_rt_sigpending(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] sigsetsize = event["sigsetsize"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { sigsetsize = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, sigsetsize,)) def handle_syscall_entry_getsid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid,)) def handle_syscall_entry_setfsgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] gid = event["gid"] self.print_filter(event, 
"[%s] %s: { cpu_id = %s }, { gid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, gid,)) def handle_syscall_entry_setfsuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] uid = event["uid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { uid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, uid,)) def handle_syscall_entry_getpgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid,)) def handle_syscall_entry_getresgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_setresgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] rgid = event["rgid"] egid = event["egid"] sgid = event["sgid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { rgid = %s, egid = %s, sgid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, rgid, egid, sgid,)) def handle_syscall_entry_getresuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_setresuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ruid = event["ruid"] euid = event["euid"] suid = event["suid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ruid = %s, euid = %s, suid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ruid, euid, suid,)) def handle_syscall_entry_setgroups(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] gidsetsize = event["gidsetsize"] grouplist = event["grouplist"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { gidsetsize = %s, grouplist = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, gidsetsize, grouplist,)) def handle_syscall_entry_getgroups(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] gidsetsize = event["gidsetsize"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { gidsetsize = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, gidsetsize,)) def handle_syscall_entry_setregid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] rgid = event["rgid"] egid = event["egid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { rgid = %s, egid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, rgid, egid,)) def handle_syscall_entry_setreuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] ruid = event["ruid"] euid = event["euid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ruid = %s, euid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, ruid, euid,)) def handle_syscall_entry_setsid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_getpgrp(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_getppid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def 
handle_syscall_entry_setpgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] pgid = event["pgid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, pgid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, pgid,)) def handle_syscall_entry_getegid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_geteuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_setgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] gid = event["gid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { gid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, gid,)) def handle_syscall_entry_setuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] uid = event["uid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { uid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, uid,)) def handle_syscall_entry_getgid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_syslog(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] type = event["type"] len = event["len"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { type = %s, len = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, type, len,)) def handle_syscall_entry_getuid(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_ptrace(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] request = event["request"] pid = event["pid"] addr = event["addr"] data = event["data"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { request = %s, pid = %s, addr = %s, data = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, request, pid, addr, data,)) def handle_syscall_entry_times(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_sysinfo(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_getrusage(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] who = event["who"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { who = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, who,)) def handle_syscall_entry_getrlimit(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] resource = event["resource"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { resource = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, resource,)) def handle_syscall_entry_gettimeofday(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_umask(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] mask = 
event["mask"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { mask = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, mask,)) def handle_syscall_entry_lchown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] filename = event["filename"] user = event["user"] group = event["group"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, user = %s, group = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, user, group,)) def handle_syscall_entry_fchown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] user = event["user"] group = event["group"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, user = %s, group = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, user, group,)) def handle_syscall_entry_chown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] filename = event["filename"] user = event["user"] group = event["group"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, user = %s, group = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, user, group,)) def handle_syscall_entry_fchmod(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, mode,)) def handle_syscall_entry_chmod(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] filename = event["filename"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, mode,)) def handle_syscall_entry_readlink(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] path = event["path"] bufsiz = event["bufsiz"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { path = %s, bufsiz = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, path, bufsiz,)) def handle_syscall_entry_symlink(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] oldname = event["oldname"] newname = event["newname"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { oldname = %s, newname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, oldname, newname,)) def handle_syscall_entry_unlink(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pathname = event["pathname"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname,)) def handle_syscall_entry_link(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] oldname = event["oldname"] newname = event["newname"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { oldname = %s, newname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, oldname, newname,)) def handle_syscall_entry_creat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pathname = event["pathname"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, mode,)) def handle_syscall_entry_rmdir(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pathname = event["pathname"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname,)) def handle_syscall_entry_mkdir(self, event): 
    def handle_syscall_entry_mkdir(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pathname = event["pathname"]
        mode = event["mode"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pathname = %s, mode = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pathname, mode,))

    def handle_syscall_entry_rename(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        oldname = event["oldname"]
        newname = event["newname"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { oldname = %s, newname = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, oldname, newname,))

    def handle_syscall_entry_fchdir(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd,))

    def handle_syscall_entry_chdir(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename,))

    def handle_syscall_entry_getcwd(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        size = event["size"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { size = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, size,))

    def handle_syscall_entry_getdents(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        count = event["count"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, count = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, count,))

    def handle_syscall_entry_ftruncate(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        length = event["length"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, length = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, length,))

    def handle_syscall_entry_truncate(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        path = event["path"]
        length = event["length"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { path = %s, length = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, path, length,))

    def handle_syscall_entry_fdatasync(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd,))

    def handle_syscall_entry_fsync(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd,))

    def handle_syscall_entry_flock(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        cmd = event["cmd"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, cmd = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, cmd,))

    def handle_syscall_entry_fcntl(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        cmd = event["cmd"]
        arg = event["arg"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, cmd = %s, arg = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, cmd, arg,))

    def handle_syscall_entry_msgctl(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        msqid = event["msqid"]
        cmd = event["cmd"]
        buf = event["buf"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { msqid = %s, cmd = %s, buf = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, msqid, cmd, buf,))

    def handle_syscall_entry_msgrcv(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        msqid = event["msqid"]
        msgsz = event["msgsz"]
        msgtyp = event["msgtyp"]
        msgflg = event["msgflg"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { msqid = %s, msgsz = %s, msgtyp = %s, msgflg = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, msqid, msgsz, msgtyp, msgflg,))

    def handle_syscall_entry_msgsnd(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        msqid = event["msqid"]
        msgp = event["msgp"]
        msgsz = event["msgsz"]
        msgflg = event["msgflg"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { msqid = %s, msgp = %s, msgsz = %s, msgflg = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, msqid, msgp, msgsz, msgflg,))

    def handle_syscall_entry_msgget(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        key = event["key"]
        msgflg = event["msgflg"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { key = %s, msgflg = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, key, msgflg,))

    def handle_syscall_entry_shmdt(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        shmaddr = event["shmaddr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { shmaddr = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, shmaddr,))

    def handle_syscall_entry_semctl(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        semid = event["semid"]
        semnum = event["semnum"]
        cmd = event["cmd"]
        arg = event["arg"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { semid = %s, semnum = %s, cmd = %s, arg = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, semid, semnum, cmd, arg,))

    def handle_syscall_entry_semop(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        semid = event["semid"]
        tsops = event["tsops"]
        nsops = event["nsops"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { semid = %s, tsops = %s, nsops = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, semid, tsops, nsops,))

    def handle_syscall_entry_semget(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        key = event["key"]
        nsems = event["nsems"]
        semflg = event["semflg"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { key = %s, nsems = %s, semflg = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, key, nsems, semflg,))

    def handle_syscall_entry_newuname(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))

    def handle_syscall_entry_kill(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pid = event["pid"]
        sig = event["sig"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, sig = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, sig,))

    def handle_syscall_entry_wait4(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        upid = event["upid"]
        options = event["options"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { upid = %s, options = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, upid, options,))

    def handle_syscall_entry_exit(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        error_code = event["error_code"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { error_code = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, error_code,))

    def handle_syscall_entry_execve(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        filename = event["filename"]
        argv = event["argv"]
        envp = event["envp"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, argv = %s, envp = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, argv, envp,))

    def handle_syscall_entry_clone(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        clone_flags = event["clone_flags"]
        newsp = event["newsp"]
        parent_tid = event["parent_tid"]
        child_tid = event["child_tid"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { clone_flags = %s, newsp = %s, parent_tid = %s, child_tid = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, clone_flags, newsp, parent_tid, child_tid,))

    def handle_syscall_entry_getsockopt(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        level = event["level"]
        optname = event["optname"]
        optlen = event["optlen"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, level = %s, optname = %s, optlen = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, level, optname, optlen,))

    def handle_syscall_entry_setsockopt(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        level = event["level"]
        optname = event["optname"]
        optval = event["optval"]
        optlen = event["optlen"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, level = %s, optname = %s, optval = %s, optlen = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, level, optname, optval, optlen,))

    def handle_syscall_entry_socketpair(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        family = event["family"]
        type = event["type"]
        protocol = event["protocol"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { family = %s, type = %s, protocol = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, family, type, protocol,))

    def handle_syscall_entry_getpeername(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        usockaddr_len = event["usockaddr_len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, usockaddr_len = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, usockaddr_len,))

    def handle_syscall_entry_getsockname(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        usockaddr_len = event["usockaddr_len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, usockaddr_len = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, usockaddr_len,))

    def handle_syscall_entry_listen(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        backlog = event["backlog"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, backlog = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, backlog,))

    def handle_syscall_entry_bind(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        umyaddr = event["umyaddr"]
        addrlen = event["addrlen"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, umyaddr = %s, addrlen = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, umyaddr, addrlen,))

    def handle_syscall_entry_shutdown(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        how = event["how"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, how = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, how,))

    def handle_syscall_entry_recvmsg(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        msg = event["msg"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, msg = %s, flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, msg, flags,))

    def handle_syscall_entry_sendmsg(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        msg = event["msg"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, msg = %s, flags = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, msg, flags,))

    def handle_syscall_entry_recvfrom(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        size = event["size"]
        flags = event["flags"]
        addr_len = event["addr_len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, size = %s, flags = %s, addr_len = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, size, flags, addr_len,))

    def handle_syscall_entry_sendto(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        buff = event["buff"]
        len = event["len"]
        flags = event["flags"]
        addr = event["addr"]
        addr_len = event["addr_len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, buff = %s, len = %s, flags = %s, addr = %s, addr_len = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, buff, len, flags, addr, addr_len,))

    def handle_syscall_entry_accept(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        upeer_sockaddr = event["upeer_sockaddr"]
        upeer_addrlen = event["upeer_addrlen"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, upeer_sockaddr = %s, upeer_addrlen = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, upeer_sockaddr, upeer_addrlen,))

    def handle_syscall_entry_connect(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        fd = event["fd"]
        uservaddr = event["uservaddr"]
        addrlen = event["addrlen"]
        family = event["family"]
        dport = event["dport"]
        _v4addr_length = event["_v4addr_length"]
        v4addr = event["v4addr"]
        _v6addr_length = event["_v6addr_length"]
        v6addr = event["v6addr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, uservaddr = %s, addrlen = %s, family = %s, dport = %s, _v4addr_length = %s, v4addr = %s, _v6addr_length = %s, v6addr = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, uservaddr, addrlen, family, dport, _v4addr_length, v4addr, _v6addr_length, v6addr,))

    def handle_syscall_entry_socket(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        family = event["family"]
        type = event["type"]
        protocol = event["protocol"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { family = %s, type = %s, protocol = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, family, type, protocol,))

    def handle_syscall_entry_sendfile64(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        out_fd = event["out_fd"]
        in_fd = event["in_fd"]
        offset = event["offset"]
        count = event["count"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { out_fd = %s, in_fd = %s, offset = %s, count = %s }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, out_fd, in_fd, offset, count,))

    def handle_syscall_entry_getpid(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" %
                          (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,))
= event["value"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which = %s, value = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which, value,)) def handle_syscall_entry_alarm(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] seconds = event["seconds"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { seconds = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, seconds,)) def handle_syscall_entry_getitimer(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] which = event["which"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which,)) def handle_syscall_entry_nanosleep(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] rqtp = event["rqtp"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { rqtp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, rqtp,)) def handle_syscall_entry_pause(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_dup2(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] oldfd = event["oldfd"] newfd = event["newfd"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { oldfd = %s, newfd = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, oldfd, newfd,)) def handle_syscall_entry_dup(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fildes = event["fildes"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fildes = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fildes,)) def handle_syscall_entry_shmctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] shmid = event["shmid"] cmd = event["cmd"] buf = event["buf"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { shmid = %s, cmd = %s, buf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, shmid, cmd, buf,)) def handle_syscall_entry_shmat(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] shmid = event["shmid"] shmaddr = event["shmaddr"] shmflg = event["shmflg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { shmid = %s, shmaddr = %s, shmflg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, shmid, shmaddr, shmflg,)) def handle_syscall_entry_shmget(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] key = event["key"] size = event["size"] shmflg = event["shmflg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { key = %s, size = %s, shmflg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, key, size, shmflg,)) def handle_syscall_entry_madvise(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] start = event["start"] len_in = event["len_in"] behavior = event["behavior"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { start = %s, len_in = %s, behavior = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, start, len_in, behavior,)) def handle_syscall_entry_mincore(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] start = event["start"] len = event["len"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { start = %s, len = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, start, len,)) def handle_syscall_entry_msync(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] start = event["start"] len = event["len"] flags = event["flags"] self.print_filter(event, "[%s] %s: { 
cpu_id = %s }, { start = %s, len = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, start, len, flags,)) def handle_syscall_entry_mremap(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] addr = event["addr"] old_len = event["old_len"] new_len = event["new_len"] flags = event["flags"] new_addr = event["new_addr"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { addr = %s, old_len = %s, new_len = %s, flags = %s, new_addr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, addr, old_len, new_len, flags, new_addr,)) def handle_syscall_entry_sched_yield(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_select(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] n = event["n"] inp = event["inp"] outp = event["outp"] exp = event["exp"] tvp = event["tvp"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { n = %s, inp = %s, outp = %s, exp = %s, tvp = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, n, inp, outp, exp, tvp,)) def handle_syscall_entry_pipe(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_syscall_entry_access(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] filename = event["filename"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, mode,)) def handle_syscall_entry_writev(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] vec = event["vec"] vlen = event["vlen"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, vec = %s, vlen = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, vec, vlen,)) def handle_syscall_entry_readv(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] vec = event["vec"] vlen = event["vlen"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, vec = %s, vlen = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, vec, vlen,)) def handle_syscall_entry_pwrite64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] buf = event["buf"] count = event["count"] pos = event["pos"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, buf = %s, count = %s, pos = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, buf, count, pos,)) def handle_syscall_entry_pread64(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] count = event["count"] pos = event["pos"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, count = %s, pos = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, count, pos,)) def handle_syscall_entry_ioctl(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] cmd = event["cmd"] arg = event["arg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, cmd = %s, arg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, cmd, arg,)) def handle_syscall_entry_rt_sigprocmask(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] how = event["how"] nset = event["nset"] sigsetsize = event["sigsetsize"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { how = %s, nset = 

    def handle_syscall_entry_rt_sigaction(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { sig = %s, act = %s, sigsetsize = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["sig"], event["act"], event["sigsetsize"]))

    def handle_syscall_entry_brk(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { brk = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["brk"]))

    def handle_syscall_entry_munmap(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { addr = %s, len = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["addr"], event["len"]))

    def handle_syscall_entry_mprotect(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { start = %s, len = %s, prot = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["start"], event["len"], event["prot"]))

    def handle_syscall_entry_mmap(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { addr = %s, len = %s, prot = %s, flags = %s, fd = %s, offset = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["addr"], event["len"], event["prot"], event["flags"], event["fd"], event["offset"]))

    def handle_syscall_entry_lseek(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, offset = %s, whence = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["fd"], event["offset"], event["whence"]))

    def handle_syscall_entry_poll(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ufds = %s, nfds = %s, timeout_msecs = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["ufds"], event["nfds"], event["timeout_msecs"]))

    def handle_syscall_entry_newlstat(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["filename"]))

    def handle_syscall_entry_newfstat(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["fd"]))

    def handle_syscall_entry_newstat(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["filename"]))

    def handle_syscall_entry_close(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["fd"]))

    def handle_syscall_entry_open(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, flags = %s, mode = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["filename"], event["flags"], event["mode"]))
event["cpu_id"] filename = event["filename"] flags = event["flags"] mode = event["mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, flags = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, flags, mode,)) def handle_syscall_entry_write(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] buf = event["buf"] count = event["count"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, buf = %s, count = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, buf, count,)) def handle_syscall_entry_read(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] fd = event["fd"] count = event["count"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { fd = %s, count = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, fd, count,)) def handle_syscall_exit_unknown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] id = event["id"] ret = event["ret"] args = event["args"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { id = %s, ret = %s, args = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, id, ret, args,)) def handle_compat_syscall_exit_unknown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] id = event["id"] ret = event["ret"] args = event["args"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { id = %s, ret = %s, args = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, id, ret, args,)) def handle_compat_syscall_entry_unknown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] id = event["id"] args = event["args"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { id = %s, args = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, id, args,)) def handle_syscall_entry_unknown(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] id = event["id"] args = event["args"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { id = %s, args = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, id, args,)) def handle_lttng_logger(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] _msg_length = event["_msg_length"] msg = event["msg"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { _msg_length = %s, msg = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, _msg_length, msg,)) def handle_snd_soc_cache_sync(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] status = event["status"] type = event["type"] id = event["id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, status = %s, type = %s, id = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, status, type, id,)) def handle_snd_soc_jack_notify(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] val = event["val"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, val = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, val,)) def handle_snd_soc_jack_report(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] mask = event["mask"] val = event["val"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, mask = %s, val = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, mask, val,)) def handle_snd_soc_jack_irq(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % 

    def handle_snd_soc_dapm_connected(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { paths = %s, stream = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["paths"], event["stream"]))

    def handle_snd_soc_dapm_input_path(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { wname = %s, pname = %s, psname = %s, path_source = %s, path_connect = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["wname"], event["pname"], event["psname"], event["path_source"], event["path_connect"]))

    def handle_snd_soc_dapm_output_path(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { wname = %s, pname = %s, psname = %s, path_sink = %s, path_connect = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["wname"], event["pname"], event["psname"], event["path_sink"], event["path_connect"]))

    def handle_snd_soc_dapm_walk_done(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, power_checks = %s, path_checks = %s, neighbour_checks = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["power_checks"], event["path_checks"], event["neighbour_checks"]))

    def handle_snd_soc_dapm_widget_event_done(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, val = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["val"]))

    def handle_snd_soc_dapm_widget_event_start(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, val = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["val"]))

    def handle_snd_soc_dapm_widget_power(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, val = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["val"]))

    def handle_snd_soc_dapm_done(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"]))

    def handle_snd_soc_dapm_start(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"]))

    def handle_snd_soc_bias_level_done(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, val = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["val"]))

    def handle_snd_soc_bias_level_start(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, val = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["val"]))
event["cpu_id"] name = event["name"] val = event["val"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, val = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, val,)) def handle_snd_soc_preg_read(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] id = event["id"] reg = event["reg"] val = event["val"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, id = %s, reg = %s, val = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, id, reg, val,)) def handle_snd_soc_preg_write(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] id = event["id"] reg = event["reg"] val = event["val"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, id = %s, reg = %s, val = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, id, reg, val,)) def handle_snd_soc_reg_read(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] id = event["id"] reg = event["reg"] val = event["val"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, id = %s, reg = %s, val = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, id, reg, val,)) def handle_snd_soc_reg_write(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] id = event["id"] reg = event["reg"] val = event["val"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, id = %s, reg = %s, val = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, id, reg, val,)) def handle_block_rq_remap(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] nr_sector = event["nr_sector"] old_dev = event["old_dev"] old_sector = event["old_sector"] rwbs = event["rwbs"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, old_dev = %s, old_sector = %s, rwbs = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sector, nr_sector, old_dev, old_sector, rwbs,)) def handle_block_bio_remap(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] nr_sector = event["nr_sector"] old_dev = event["old_dev"] old_sector = event["old_sector"] rwbs = event["rwbs"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, old_dev = %s, old_sector = %s, rwbs = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sector, nr_sector, old_dev, old_sector, rwbs,)) def handle_block_split(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] new_sector = event["new_sector"] rwbs = event["rwbs"] tid = event["tid"] comm = event["comm"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, new_sector = %s, rwbs = %s, tid = %s, comm = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sector, new_sector, rwbs, tid, comm,)) def handle_block_unplug(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] nr_rq = event["nr_rq"] tid = event["tid"] comm = event["comm"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nr_rq = %s, tid = %s, comm = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, nr_rq, tid, comm,)) def handle_block_plug(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] tid = event["tid"] comm = event["comm"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { tid = %s, comm = %s 
}" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, tid, comm,)) def handle_block_sleeprq(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] nr_sector = event["nr_sector"] rwbs = event["rwbs"] tid = event["tid"] comm = event["comm"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, rwbs = %s, tid = %s, comm = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sector, nr_sector, rwbs, tid, comm,)) def handle_block_getrq(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] nr_sector = event["nr_sector"] rwbs = event["rwbs"] tid = event["tid"] comm = event["comm"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, rwbs = %s, tid = %s, comm = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sector, nr_sector, rwbs, tid, comm,)) def handle_block_bio_queue(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] nr_sector = event["nr_sector"] rwbs = event["rwbs"] tid = event["tid"] comm = event["comm"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, rwbs = %s, tid = %s, comm = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sector, nr_sector, rwbs, tid, comm,)) def handle_block_bio_frontmerge(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] nr_sector = event["nr_sector"] rwbs = event["rwbs"] tid = event["tid"] comm = event["comm"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, rwbs = %s, tid = %s, comm = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sector, nr_sector, rwbs, tid, comm,)) def handle_block_bio_backmerge(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] nr_sector = event["nr_sector"] rwbs = event["rwbs"] tid = event["tid"] comm = event["comm"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, rwbs = %s, tid = %s, comm = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sector, nr_sector, rwbs, tid, comm,)) def handle_block_bio_complete(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] nr_sector = event["nr_sector"] error = event["error"] rwbs = event["rwbs"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, error = %s, rwbs = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sector, nr_sector, error, rwbs,)) def handle_block_bio_bounce(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] nr_sector = event["nr_sector"] rwbs = event["rwbs"] tid = event["tid"] comm = event["comm"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, rwbs = %s, tid = %s, comm = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sector, nr_sector, rwbs, tid, comm,)) def handle_block_rq_issue(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sector = event["sector"] nr_sector = event["nr_sector"] bytes = event["bytes"] rwbs = event["rwbs"] tid = event["tid"] comm = event["comm"] _cmd_length = event["_cmd_length"] cmd = event["cmd"] 

    def handle_block_rq_insert(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, bytes = %s, rwbs = %s, tid = %s, comm = %s, _cmd_length = %s, cmd = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["dev"], event["sector"], event["nr_sector"], event["bytes"], event["rwbs"], event["tid"], event["comm"], event["_cmd_length"], event["cmd"]))

    def handle_block_rq_complete(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, errors = %s, rwbs = %s, _cmd_length = %s, cmd = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["dev"], event["sector"], event["nr_sector"], event["errors"], event["rwbs"], event["_cmd_length"], event["cmd"]))

    def handle_block_rq_requeue(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, errors = %s, rwbs = %s, _cmd_length = %s, cmd = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["dev"], event["sector"], event["nr_sector"], event["errors"], event["rwbs"], event["_cmd_length"], event["cmd"]))

    def handle_block_rq_abort(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, nr_sector = %s, errors = %s, rwbs = %s, _cmd_length = %s, cmd = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["dev"], event["sector"], event["nr_sector"], event["errors"], event["rwbs"], event["_cmd_length"], event["cmd"]))

    def handle_block_dirty_buffer(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, size = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["dev"], event["sector"], event["size"]))

    def handle_block_touch_buffer(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sector = %s, size = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["dev"], event["sector"], event["size"]))
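    # Memory-compaction, GPIO and interrupt (softirq/irq handler)
    # tracepoints.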

    def handle_mm_compaction_migratepages(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nr_migrated = %s, nr_failed = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["nr_migrated"], event["nr_failed"]))

    def handle_mm_compaction_isolate_freepages(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nr_scanned = %s, nr_taken = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["nr_scanned"], event["nr_taken"]))

    def handle_mm_compaction_isolate_migratepages(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nr_scanned = %s, nr_taken = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["nr_scanned"], event["nr_taken"]))

    def handle_gpio_value(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { gpio = %s, get = %s, value = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["gpio"], event["get"], event["value"]))

    def handle_gpio_direction(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { gpio = %s, in = %s, err = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["gpio"], event["in"], event["err"]))

    def handle_softirq_raise(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { vec = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["vec"]))

    def handle_softirq_exit(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { vec = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["vec"]))

    def handle_softirq_entry(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { vec = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["vec"]))

    def handle_irq_handler_exit(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { irq = %s, ret = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["irq"], event["ret"]))

    def handle_irq_handler_entry(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { irq = %s, name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["irq"], event["name"]))
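    # jbd2 tracepoints: the journaling layer used by ext3/ext4
    # (commit, checkpoint and transaction life cycle).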
event["dropped"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, tid = %s, chp_time = %s, forced_to_close = %s, written = %s, dropped = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, tid, chp_time, forced_to_close, written, dropped,)) def handle_jbd2_run_stats(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] tid = event["tid"] wait = event["wait"] running = event["running"] locked = event["locked"] flushing = event["flushing"] logging = event["logging"] handle_count = event["handle_count"] blocks = event["blocks"] blocks_logged = event["blocks_logged"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, tid = %s, wait = %s, running = %s, locked = %s, flushing = %s, logging = %s, handle_count = %s, blocks = %s, blocks_logged = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, tid, wait, running, locked, flushing, logging, handle_count, blocks, blocks_logged,)) def handle_jbd2_submit_inode_data(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] ino = event["ino"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, ino = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, ino,)) def handle_jbd2_end_commit(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sync_commit = event["sync_commit"] transaction = event["transaction"] head = event["head"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sync_commit = %s, transaction = %s, head = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sync_commit, transaction, head,)) def handle_jbd2_drop_transaction(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sync_commit = event["sync_commit"] transaction = event["transaction"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sync_commit = %s, transaction = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sync_commit, transaction,)) def handle_jbd2_commit_logging(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sync_commit = event["sync_commit"] transaction = event["transaction"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sync_commit = %s, transaction = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sync_commit, transaction,)) def handle_jbd2_commit_flushing(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sync_commit = event["sync_commit"] transaction = event["transaction"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sync_commit = %s, transaction = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sync_commit, transaction,)) def handle_jbd2_commit_locking(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sync_commit = event["sync_commit"] transaction = event["transaction"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sync_commit = %s, transaction = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sync_commit, transaction,)) def handle_jbd2_start_commit(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] dev = event["dev"] sync_commit = event["sync_commit"] transaction = event["transaction"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, sync_commit = %s, transaction = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, sync_commit, 

    def handle_jbd2_checkpoint(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, result = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["dev"], event["result"]))

    def handle_mm_page_alloc_extfrag(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { page = %s, alloc_order = %s, fallback_order = %s, alloc_migratetype = %s, fallback_migratetype = %s, change_ownership = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["page"], event["alloc_order"], event["fallback_order"], event["alloc_migratetype"], event["fallback_migratetype"], event["change_ownership"]))

    def handle_mm_page_pcpu_drain(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { page = %s, order = %s, migratetype = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["page"], event["order"], event["migratetype"]))

    def handle_mm_page_alloc_zone_locked(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { page = %s, order = %s, migratetype = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["page"], event["order"], event["migratetype"]))

    def handle_mm_page_alloc(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { page = %s, order = %s, gfp_flags = %s, migratetype = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["page"], event["order"], event["gfp_flags"], event["migratetype"]))

    def handle_mm_page_free_batched(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { page = %s, cold = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["page"], event["cold"]))

    def handle_mm_page_free(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { page = %s, order = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["page"], event["order"]))
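    # Kernel slab allocator (kmem_*) tracepoints: kmalloc/kfree and
    # kmem_cache allocation events, with requested vs. allocated sizes.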

    def handle_kmem_cache_free(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { call_site = %s, ptr = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["call_site"], event["ptr"]))

    def handle_kmem_kfree(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { call_site = %s, ptr = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["call_site"], event["ptr"]))

    def handle_kmem_cache_alloc_node(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { call_site = %s, ptr = %s, bytes_req = %s, bytes_alloc = %s, gfp_flags = %s, node = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["call_site"], event["ptr"], event["bytes_req"], event["bytes_alloc"], event["gfp_flags"], event["node"]))

    def handle_kmem_kmalloc_node(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { call_site = %s, ptr = %s, bytes_req = %s, bytes_alloc = %s, gfp_flags = %s, node = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["call_site"], event["ptr"], event["bytes_req"], event["bytes_alloc"], event["gfp_flags"], event["node"]))

    def handle_kmem_cache_alloc(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { call_site = %s, ptr = %s, bytes_req = %s, bytes_alloc = %s, gfp_flags = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["call_site"], event["ptr"], event["bytes_req"], event["bytes_alloc"], event["gfp_flags"]))

    def handle_kmem_kmalloc(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { call_site = %s, ptr = %s, bytes_req = %s, bytes_alloc = %s, gfp_flags = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["call_site"], event["ptr"], event["bytes_req"], event["bytes_alloc"], event["gfp_flags"]))
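    # KVM tracepoints: asynchronous page faults, MMIO, interrupt
    # injection and guest userspace exits.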

    def handle_kvm_async_pf_completed(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { address = %s, gva = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["address"], event["gva"]))

    def handle_kvm_async_pf_ready(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { token = %s, gva = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["token"], event["gva"]))

    def handle_kvm_async_pf_not_present(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { token = %s, gva = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["token"], event["gva"]))

    def handle_kvm_async_pf_doublefault(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { gva = %s, gfn = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["gva"], event["gfn"]))

    def handle_kvm_try_async_get_page(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { gva = %s, gfn = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["gva"], event["gfn"]))

    def handle_kvm_age_page(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { hva = %s, gfn = %s, referenced = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["hva"], event["gfn"], event["referenced"]))

    def handle_kvm_fpu(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { load = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["load"]))

    def handle_kvm_mmio(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { type = %s, len = %s, gpa = %s, val = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["type"], event["len"], event["gpa"], event["val"]))

    def handle_kvm_ack_irq(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { irqchip = %s, pin = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["irqchip"], event["pin"]))

    def handle_kvm_msi_set_irq(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { address = %s, data = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["address"], event["data"]))

    def handle_kvm_ioapic_set_irq(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { e = %s, pin = %s, coalesced = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["e"], event["pin"], event["coalesced"]))

    def handle_kvm_set_irq(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { gsi = %s, level = %s, irq_source_id = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["gsi"], event["level"], event["irq_source_id"]))

    def handle_kvm_userspace_exit(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { reason = %s, errno = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["reason"], event["errno"]))

    def handle_module_request(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ip = %s, wait = %s, name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["ip"], event["wait"], event["name"]))

    def handle_module_put(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ip = %s, refcnt = %s, name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["ip"], event["refcnt"], event["name"]))

    def handle_module_get(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ip = %s, refcnt = %s, name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["ip"], event["refcnt"], event["name"]))

    def handle_module_free(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"]))

    def handle_module_load(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { taints = %s, name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["taints"], event["name"]))

    def handle_napi_poll(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { napi = %s, dev_name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["napi"], event["dev_name"]))

    def handle_netif_rx(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { skbaddr = %s, len = %s, name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["skbaddr"], event["len"], event["name"]))

    def handle_netif_receive_skb(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { skbaddr = %s, len = %s, name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["skbaddr"], event["len"], event["name"]))

    def handle_net_dev_queue(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { skbaddr = %s, len = %s, name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["skbaddr"], event["len"], event["name"]))

    def handle_net_dev_xmit(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { skbaddr = %s, len = %s, rc = %s, name = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["skbaddr"], event["len"], event["rc"], event["name"]))
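    # Power-management tracepoints: power domains, clocks, wakeup
    # sources, machine suspend and CPU frequency/idle transitions.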

    def handle_power_domain_target(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, state = %s, cpu_id = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["state"], event["cpu_id"]))

    def handle_power_clock_set_rate(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, state = %s, cpu_id = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["state"], event["cpu_id"]))

    def handle_power_clock_disable(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, state = %s, cpu_id = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["state"], event["cpu_id"]))

    def handle_power_clock_enable(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, state = %s, cpu_id = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["state"], event["cpu_id"]))

    def handle_power_wakeup_source_deactivate(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, state = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["state"]))

    def handle_power_wakeup_source_activate(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, state = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["state"]))

    def handle_power_machine_suspend(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { state = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["state"]))

    def handle_power_cpu_frequency(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { state = %s, cpu_id = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["state"], event["cpu_id"]))

    def handle_power_cpu_idle(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { state = %s, cpu_id = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["state"], event["cpu_id"]))

    def handle_console(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { _msg_length = %s, msg = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["_msg_length"], event["msg"]))

    def handle_random_extract_entropy_user(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pool_name = %s, nbytes = %s, entropy_count = %s, IP = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["pool_name"], event["nbytes"], event["entropy_count"], event["IP"]))

    def handle_random_extract_entropy(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pool_name = %s, nbytes = %s, entropy_count = %s, IP = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["pool_name"], event["nbytes"], event["entropy_count"], event["IP"]))

    def handle_random_get_random_bytes(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nbytes = %s, IP = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["nbytes"], event["IP"]))

    def handle_random_credit_entropy_bits(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pool_name = %s, bits = %s, entropy_count = %s, entropy_total = %s, IP = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["pool_name"], event["bits"], event["entropy_count"], event["entropy_total"], event["IP"]))

    def handle_random_mix_pool_bytes_nolock(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pool_name = %s, bytes = %s, IP = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["pool_name"], event["bytes"], event["IP"]))

    def handle_random_mix_pool_bytes(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pool_name = %s, bytes = %s, IP = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["pool_name"], event["bytes"], event["IP"]))
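    # RCU and register-map (regmap) tracepoints: regmap traces kernel
    # register cache and hardware register accesses.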

    def handle_rcu_utilization(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { s = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["s"]))

    def handle_regmap_cache_bypass(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, flag = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["flag"]))

    def handle_regmap_cache_only(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, flag = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["flag"]))

    def handle_regcache_sync(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, status = %s, type = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["status"], event["type"]))

    def handle_regmap_hw_write_done(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, reg = %s, count = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["reg"], event["count"]))

    def handle_regmap_hw_write_start(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, reg = %s, count = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["reg"], event["count"]))

    def handle_regmap_hw_read_done(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, reg = %s, count = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["reg"], event["count"]))

    def handle_regmap_hw_read_start(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, reg = %s, count = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["reg"], event["count"]))

    def handle_regmap_reg_read_cache(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, reg = %s, val = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["reg"], event["val"]))

    def handle_regmap_reg_read(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, reg = %s, val = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["reg"], event["val"]))

    def handle_regmap_reg_write(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, reg = %s, val = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["name"], event["reg"], event["val"]))
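    # Voltage regulator and runtime power management (rpm_*)
    # tracepoints.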
event["name"] val = event["val"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, val = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, val,)) def handle_regulator_set_voltage(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] min = event["min"] max = event["max"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, min = %s, max = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, min, max,)) def handle_regulator_disable_complete(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_regulator_disable(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_regulator_enable_complete(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_regulator_enable_delay(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_regulator_enable(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_rpm_return_int(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] ip = event["ip"] ret = event["ret"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, ip = %s, ret = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, ip, ret,)) def handle_rpm_idle(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] flags = event["flags"] usage_count = event["usage_count"] disable_depth = event["disable_depth"] runtime_auto = event["runtime_auto"] request_pending = event["request_pending"] irq_safe = event["irq_safe"] child_count = event["child_count"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, flags = %s, usage_count = %s, disable_depth = %s, runtime_auto = %s, request_pending = %s, irq_safe = %s, child_count = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, flags, usage_count, disable_depth, runtime_auto, request_pending, irq_safe, child_count,)) def handle_rpm_resume(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] flags = event["flags"] usage_count = event["usage_count"] disable_depth = event["disable_depth"] runtime_auto = event["runtime_auto"] request_pending = event["request_pending"] irq_safe = event["irq_safe"] child_count = event["child_count"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, flags = %s, usage_count = %s, disable_depth = %s, runtime_auto = %s, request_pending = %s, irq_safe = %s, child_count = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, flags, usage_count, disable_depth, runtime_auto, request_pending, irq_safe, child_count,)) def handle_rpm_suspend(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = 
event["name"] flags = event["flags"] usage_count = event["usage_count"] disable_depth = event["disable_depth"] runtime_auto = event["runtime_auto"] request_pending = event["request_pending"] irq_safe = event["irq_safe"] child_count = event["child_count"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, flags = %s, usage_count = %s, disable_depth = %s, runtime_auto = %s, request_pending = %s, irq_safe = %s, child_count = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, flags, usage_count, disable_depth, runtime_auto, request_pending, irq_safe, child_count,)) def handle_sched_pi_setprio(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] comm = event["comm"] tid = event["tid"] oldprio = event["oldprio"] newprio = event["newprio"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, oldprio = %s, newprio = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, comm, tid, oldprio, newprio,)) def handle_sched_stat_runtime(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] comm = event["comm"] tid = event["tid"] runtime = event["runtime"] vruntime = event["vruntime"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, runtime = %s, vruntime = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, comm, tid, runtime, vruntime,)) def handle_sched_stat_blocked(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] comm = event["comm"] tid = event["tid"] delay = event["delay"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, delay = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, comm, tid, delay,)) def handle_sched_stat_iowait(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] comm = event["comm"] tid = event["tid"] delay = event["delay"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, delay = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, comm, tid, delay,)) def handle_sched_stat_sleep(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] comm = event["comm"] tid = event["tid"] delay = event["delay"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, delay = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, comm, tid, delay,)) def handle_sched_stat_wait(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] comm = event["comm"] tid = event["tid"] delay = event["delay"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, delay = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, comm, tid, delay,)) def handle_sched_process_exec(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] filename = event["filename"] tid = event["tid"] old_tid = event["old_tid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { filename = %s, tid = %s, old_tid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, filename, tid, old_tid,)) def handle_sched_process_fork(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] parent_comm = event["parent_comm"] parent_tid = event["parent_tid"] parent_pid = event["parent_pid"] child_comm = event["child_comm"] child_tid = event["child_tid"] child_pid = event["child_pid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { parent_comm = %s, parent_tid = %s, parent_pid = %s, child_comm = %s, child_tid = %s, child_pid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, parent_comm, parent_tid, parent_pid, 

    def handle_sched_process_wait(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, prio = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["comm"], event["tid"], event["prio"]))

    def handle_sched_wait_task(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, prio = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["comm"], event["tid"], event["prio"]))

    def handle_sched_process_exit(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, prio = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["comm"], event["tid"], event["prio"]))

    def handle_sched_process_free(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, prio = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["comm"], event["tid"], event["prio"]))

    def handle_sched_migrate_task(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, prio = %s, orig_cpu = %s, dest_cpu = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["comm"], event["tid"], event["prio"], event["orig_cpu"], event["dest_cpu"]))

    def handle_sched_switch(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { prev_comm = %s, prev_tid = %s, prev_prio = %s, prev_state = %s, next_comm = %s, next_tid = %s, next_prio = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["prev_comm"], event["prev_tid"], event["prev_prio"], event["prev_state"], event["next_comm"], event["next_tid"], event["next_prio"]))

    def handle_sched_wakeup_new(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, prio = %s, success = %s, target_cpu = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["comm"], event["tid"], event["prio"], event["success"], event["target_cpu"]))

    def handle_sched_wakeup(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s, prio = %s, success = %s, target_cpu = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["comm"], event["tid"], event["prio"], event["success"], event["target_cpu"]))

    def handle_sched_kthread_stop_ret(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { ret = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["ret"]))

    def handle_sched_kthread_stop(self, event):
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s }" % (self.ns_to_hour_nsec(event.timestamp), event.name, event["cpu_id"], event["comm"], event["tid"]))
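    # SCSI mid-layer tracepoints: command dispatch life cycle
    # (start, done, error, timeout) and error-handler wakeups.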
event["cpu_id"] comm = event["comm"] tid = event["tid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, tid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, comm, tid,)) def handle_scsi_eh_wakeup(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] host_no = event["host_no"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { host_no = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, host_no,)) def handle_scsi_dispatch_cmd_timeout(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] host_no = event["host_no"] channel = event["channel"] id = event["id"] lun = event["lun"] result = event["result"] opcode = event["opcode"] cmd_len = event["cmd_len"] data_sglen = event["data_sglen"] prot_sglen = event["prot_sglen"] prot_op = event["prot_op"] _cmnd_length = event["_cmnd_length"] cmnd = event["cmnd"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { host_no = %s, channel = %s, id = %s, lun = %s, result = %s, opcode = %s, cmd_len = %s, data_sglen = %s, prot_sglen = %s, prot_op = %s, _cmnd_length = %s, cmnd = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, host_no, channel, id, lun, result, opcode, cmd_len, data_sglen, prot_sglen, prot_op, _cmnd_length, cmnd,)) def handle_scsi_dispatch_cmd_done(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] host_no = event["host_no"] channel = event["channel"] id = event["id"] lun = event["lun"] result = event["result"] opcode = event["opcode"] cmd_len = event["cmd_len"] data_sglen = event["data_sglen"] prot_sglen = event["prot_sglen"] prot_op = event["prot_op"] _cmnd_length = event["_cmnd_length"] cmnd = event["cmnd"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { host_no = %s, channel = %s, id = %s, lun = %s, result = %s, opcode = %s, cmd_len = %s, data_sglen = %s, prot_sglen = %s, prot_op = %s, _cmnd_length = %s, cmnd = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, host_no, channel, id, lun, result, opcode, cmd_len, data_sglen, prot_sglen, prot_op, _cmnd_length, cmnd,)) def handle_scsi_dispatch_cmd_error(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] host_no = event["host_no"] channel = event["channel"] id = event["id"] lun = event["lun"] rtn = event["rtn"] opcode = event["opcode"] cmd_len = event["cmd_len"] data_sglen = event["data_sglen"] prot_sglen = event["prot_sglen"] prot_op = event["prot_op"] _cmnd_length = event["_cmnd_length"] cmnd = event["cmnd"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { host_no = %s, channel = %s, id = %s, lun = %s, rtn = %s, opcode = %s, cmd_len = %s, data_sglen = %s, prot_sglen = %s, prot_op = %s, _cmnd_length = %s, cmnd = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, host_no, channel, id, lun, rtn, opcode, cmd_len, data_sglen, prot_sglen, prot_op, _cmnd_length, cmnd,)) def handle_scsi_dispatch_cmd_start(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] host_no = event["host_no"] channel = event["channel"] id = event["id"] lun = event["lun"] opcode = event["opcode"] cmd_len = event["cmd_len"] data_sglen = event["data_sglen"] prot_sglen = event["prot_sglen"] prot_op = event["prot_op"] _cmnd_length = event["_cmnd_length"] cmnd = event["cmnd"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { host_no = %s, channel = %s, id = %s, lun = %s, opcode = %s, cmd_len = %s, data_sglen = %s, prot_sglen = %s, prot_op = %s, _cmnd_length = %s, cmnd = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, host_no, channel, id, lun, opcode, 

    def handle_signal_deliver(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        sig = event["sig"]
        errno = event["errno"]
        code = event["code"]
        sa_handler = event["sa_handler"]
        sa_flags = event["sa_flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { sig = %s, errno = %s, code = %s, sa_handler = %s, sa_flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, sig, errno, code, sa_handler, sa_flags))

    def handle_signal_generate(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        sig = event["sig"]
        errno = event["errno"]
        code = event["code"]
        comm = event["comm"]
        pid = event["pid"]
        group = event["group"]
        result = event["result"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { sig = %s, errno = %s, code = %s, comm = %s, pid = %s, group = %s, result = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, sig, errno, code, comm, pid, group, result))

    def handle_skb_copy_datagram_iovec(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        skbaddr = event["skbaddr"]
        len = event["len"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { skbaddr = %s, len = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, skbaddr, len))

    def handle_skb_consume(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        skbaddr = event["skbaddr"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { skbaddr = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, skbaddr))

    def handle_skb_kfree(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        skbaddr = event["skbaddr"]
        location = event["location"]
        protocol = event["protocol"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { skbaddr = %s, location = %s, protocol = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, skbaddr, location, protocol))

    def handle_sock_exceed_buf_limit(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        sysctl_mem = event["sysctl_mem"]
        allocated = event["allocated"]
        sysctl_rmem = event["sysctl_rmem"]
        rmem_alloc = event["rmem_alloc"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, sysctl_mem = %s, allocated = %s, sysctl_rmem = %s, rmem_alloc = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, sysctl_mem, allocated, sysctl_rmem, rmem_alloc))

    def handle_sock_rcvqueue_full(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        rmem_alloc = event["rmem_alloc"]
        truesize = event["truesize"]
        sk_rcvbuf = event["sk_rcvbuf"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { rmem_alloc = %s, truesize = %s, sk_rcvbuf = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, rmem_alloc, truesize, sk_rcvbuf))

    def handle_lttng_statedump_interrupt(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        irq = event["irq"]
        name = event["name"]
        action = event["action"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { irq = %s, name = %s, action = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, irq, name, action))

    def handle_lttng_statedump_block_device(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        dev = event["dev"]
        diskname = event["diskname"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { dev = %s, diskname = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, dev, diskname))

    def handle_lttng_statedump_network_interface(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        address_ipv4 = event["address_ipv4"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, address_ipv4 = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, address_ipv4))
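
    # The lttng_statedump_* handlers (above and below) cover the state dump
    # LTTng records at the start of a session: interrupt lines, block
    # devices, network interfaces, memory maps, file descriptors and
    # processes.  They describe pre-existing system state rather than live
    # activity, which is why the statedump_start/statedump_end markers
    # further down carry no payload of their own.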
event["name"] address_ipv4 = event["address_ipv4"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, address_ipv4 = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, address_ipv4,)) def handle_lttng_statedump_vm_map(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] start = event["start"] end = event["end"] flags = event["flags"] inode = event["inode"] pgoff = event["pgoff"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, start = %s, end = %s, flags = %s, inode = %s, pgoff = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, start, end, flags, inode, pgoff,)) def handle_lttng_statedump_file_descriptor(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] pid = event["pid"] fd = event["fd"] flags = event["flags"] fmode = event["fmode"] filename = event["filename"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pid = %s, fd = %s, flags = %s, fmode = %s, filename = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pid, fd, flags, fmode, filename,)) def handle_lttng_statedump_process_state(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] tid = event["tid"] vtid = event["vtid"] pid = event["pid"] vpid = event["vpid"] ppid = event["ppid"] vppid = event["vppid"] name = event["name"] type = event["type"] mode = event["mode"] submode = event["submode"] status = event["status"] ns_level = event["ns_level"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { tid = %s, vtid = %s, pid = %s, vpid = %s, ppid = %s, vppid = %s, name = %s, type = %s, mode = %s, submode = %s, status = %s, ns_level = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, tid, vtid, pid, vpid, ppid, vppid, name, type, mode, submode, status, ns_level,)) def handle_lttng_statedump_end(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_lttng_statedump_start(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id,)) def handle_rpc_task_wakeup(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] clnt = event["clnt"] task = event["task"] timeout = event["timeout"] runstate = event["runstate"] status = event["status"] flags = event["flags"] q_name = event["q_name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { clnt = %s, task = %s, timeout = %s, runstate = %s, status = %s, flags = %s, q_name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, clnt, task, timeout, runstate, status, flags, q_name,)) def handle_rpc_task_sleep(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] clnt = event["clnt"] task = event["task"] timeout = event["timeout"] runstate = event["runstate"] status = event["status"] flags = event["flags"] q_name = event["q_name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { clnt = %s, task = %s, timeout = %s, runstate = %s, status = %s, flags = %s, q_name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, clnt, task, timeout, runstate, status, flags, q_name,)) def handle_rpc_task_complete(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] clnt = event["clnt"] task = event["task"] action = event["action"] runstate = event["runstate"] status = event["status"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = 

    def handle_rpc_task_run_action(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        clnt = event["clnt"]
        task = event["task"]
        action = event["action"]
        runstate = event["runstate"]
        status = event["status"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { clnt = %s, task = %s, action = %s, runstate = %s, status = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, clnt, task, action, runstate, status, flags))

    def handle_rpc_task_begin(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        clnt = event["clnt"]
        task = event["task"]
        action = event["action"]
        runstate = event["runstate"]
        status = event["status"]
        flags = event["flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { clnt = %s, task = %s, action = %s, runstate = %s, status = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, clnt, task, action, runstate, status, flags))

    def handle_rpc_connect_status(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        task = event["task"]
        clnt = event["clnt"]
        status = event["status"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { task = %s, clnt = %s, status = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, task, clnt, status))

    def handle_rpc_bind_status(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        task = event["task"]
        clnt = event["clnt"]
        status = event["status"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { task = %s, clnt = %s, status = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, task, clnt, status))

    def handle_rpc_call_status(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        task = event["task"]
        clnt = event["clnt"]
        status = event["status"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { task = %s, clnt = %s, status = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, task, clnt, status))

    def handle_itimer_expire(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        which = event["which"]
        pid = event["pid"]
        now = event["now"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which = %s, pid = %s, now = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which, pid, now))

    def handle_itimer_state(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        which = event["which"]
        expires = event["expires"]
        value_sec = event["value_sec"]
        value_usec = event["value_usec"]
        interval_sec = event["interval_sec"]
        interval_usec = event["interval_usec"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { which = %s, expires = %s, value_sec = %s, value_usec = %s, interval_sec = %s, interval_usec = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, which, expires, value_sec, value_usec, interval_sec, interval_usec))

    def handle_hrtimer_cancel(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        hrtimer = event["hrtimer"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { hrtimer = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, hrtimer))

    def handle_hrtimer_expire_exit(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        hrtimer = event["hrtimer"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { hrtimer = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, hrtimer))

    def handle_hrtimer_expire_entry(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        hrtimer = event["hrtimer"]
        now = event["now"]
        function = event["function"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { hrtimer = %s, now = %s, function = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, hrtimer, now, function))

    def handle_hrtimer_start(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        hrtimer = event["hrtimer"]
        function = event["function"]
        expires = event["expires"]
        softexpires = event["softexpires"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { hrtimer = %s, function = %s, expires = %s, softexpires = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, hrtimer, function, expires, softexpires))

    def handle_hrtimer_init(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        hrtimer = event["hrtimer"]
        clockid = event["clockid"]
        mode = event["mode"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { hrtimer = %s, clockid = %s, mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, hrtimer, clockid, mode))

    def handle_timer_cancel(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        timer = event["timer"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { timer = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, timer))

    def handle_timer_expire_exit(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        timer = event["timer"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { timer = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, timer))

    def handle_timer_expire_entry(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        timer = event["timer"]
        now = event["now"]
        function = event["function"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { timer = %s, now = %s, function = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, timer, now, function))

    def handle_timer_start(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        timer = event["timer"]
        function = event["function"]
        expires = event["expires"]
        now = event["now"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { timer = %s, function = %s, expires = %s, now = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, timer, function, expires, now))

    def handle_timer_init(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        timer = event["timer"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { timer = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, timer))

    def handle_udp_fail_queue_rcv_skb(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        rc = event["rc"]
        lport = event["lport"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { rc = %s, lport = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, rc, lport))

    def handle_mm_vmscan_lru_shrink_inactive(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        nid = event["nid"]
        zid = event["zid"]
        nr_scanned = event["nr_scanned"]
        nr_reclaimed = event["nr_reclaimed"]
        priority = event["priority"]
        reclaim_flags = event["reclaim_flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nid = %s, zid = %s, nr_scanned = %s, nr_reclaimed = %s, priority = %s, reclaim_flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, nid, zid, nr_scanned, nr_reclaimed, priority, reclaim_flags))

    def handle_mm_vmscan_writepage(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        page = event["page"]
        reclaim_flags = event["reclaim_flags"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { page = %s, reclaim_flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, page, reclaim_flags))
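
    # The timer/hrtimer handlers above print raw pointer values for the
    # timer and its callback; correlating hrtimer_start with
    # hrtimer_expire_entry on the shared "hrtimer" field gives the firing
    # delay of a given timer instance.  The mm_vmscan_* handlers around here
    # follow the page-reclaim path; comparing nr_scanned with nr_reclaimed
    # gives a rough measure of reclaim efficiency.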
= event["reclaim_flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { page = %s, reclaim_flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, page, reclaim_flags,)) def handle_mm_vmscan_memcg_isolate(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] order = event["order"] nr_requested = event["nr_requested"] nr_scanned = event["nr_scanned"] nr_taken = event["nr_taken"] isolate_mode = event["isolate_mode"] file = event["file"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { order = %s, nr_requested = %s, nr_scanned = %s, nr_taken = %s, isolate_mode = %s, file = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, order, nr_requested, nr_scanned, nr_taken, isolate_mode, file,)) def handle_mm_vmscan_lru_isolate(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] order = event["order"] nr_requested = event["nr_requested"] nr_scanned = event["nr_scanned"] nr_taken = event["nr_taken"] isolate_mode = event["isolate_mode"] file = event["file"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { order = %s, nr_requested = %s, nr_scanned = %s, nr_taken = %s, isolate_mode = %s, file = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, order, nr_requested, nr_scanned, nr_taken, isolate_mode, file,)) def handle_mm_shrink_slab_end(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] shr = event["shr"] shrink = event["shrink"] unused_scan = event["unused_scan"] new_scan = event["new_scan"] retval = event["retval"] total_scan = event["total_scan"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { shr = %s, shrink = %s, unused_scan = %s, new_scan = %s, retval = %s, total_scan = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, shr, shrink, unused_scan, new_scan, retval, total_scan,)) def handle_mm_shrink_slab_start(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] shr = event["shr"] shrink = event["shrink"] nr_objects_to_shrink = event["nr_objects_to_shrink"] gfp_flags = event["gfp_flags"] pgs_scanned = event["pgs_scanned"] lru_pgs = event["lru_pgs"] cache_items = event["cache_items"] delta = event["delta"] total_scan = event["total_scan"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { shr = %s, shrink = %s, nr_objects_to_shrink = %s, gfp_flags = %s, pgs_scanned = %s, lru_pgs = %s, cache_items = %s, delta = %s, total_scan = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, shr, shrink, nr_objects_to_shrink, gfp_flags, pgs_scanned, lru_pgs, cache_items, delta, total_scan,)) def handle_mm_vmscan_memcg_softlimit_reclaim_end(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] nr_reclaimed = event["nr_reclaimed"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nr_reclaimed = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, nr_reclaimed,)) def handle_mm_vmscan_memcg_reclaim_end(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] nr_reclaimed = event["nr_reclaimed"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nr_reclaimed = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, nr_reclaimed,)) def handle_mm_vmscan_direct_reclaim_end(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] nr_reclaimed = event["nr_reclaimed"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nr_reclaimed = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, nr_reclaimed,)) def handle_mm_vmscan_memcg_softlimit_reclaim_begin(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] 
order = event["order"] may_writepage = event["may_writepage"] gfp_flags = event["gfp_flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { order = %s, may_writepage = %s, gfp_flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, order, may_writepage, gfp_flags,)) def handle_mm_vmscan_memcg_reclaim_begin(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] order = event["order"] may_writepage = event["may_writepage"] gfp_flags = event["gfp_flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { order = %s, may_writepage = %s, gfp_flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, order, may_writepage, gfp_flags,)) def handle_mm_vmscan_direct_reclaim_begin(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] order = event["order"] may_writepage = event["may_writepage"] gfp_flags = event["gfp_flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { order = %s, may_writepage = %s, gfp_flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, order, may_writepage, gfp_flags,)) def handle_mm_vmscan_wakeup_kswapd(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] nid = event["nid"] zid = event["zid"] order = event["order"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nid = %s, zid = %s, order = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, nid, zid, order,)) def handle_mm_vmscan_kswapd_wake(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] nid = event["nid"] order = event["order"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nid = %s, order = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, nid, order,)) def handle_mm_vmscan_kswapd_sleep(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] nid = event["nid"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nid = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, nid,)) def handle_workqueue_execute_end(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] work = event["work"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { work = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, work,)) def handle_workqueue_execute_start(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] work = event["work"] function = event["function"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { work = %s, function = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, work, function,)) def handle_workqueue_activate_work(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] work = event["work"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { work = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, work,)) def handle_workqueue_queue_work(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] work = event["work"] function = event["function"] req_cpu = event["req_cpu"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { work = %s, function = %s, req_cpu = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, work, function, req_cpu,)) def handle_writeback_single_inode(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] ino = event["ino"] state = event["state"] dirtied_when = event["dirtied_when"] writeback_index = event["writeback_index"] nr_to_write = event["nr_to_write"] wrote = event["wrote"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, ino = %s, state = %s, dirtied_when = %s, writeback_index = %s, 

    def handle_writeback_wait_iff_congested(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        usec_timeout = event["usec_timeout"]
        usec_delayed = event["usec_delayed"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { usec_timeout = %s, usec_delayed = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, usec_timeout, usec_delayed))

    def handle_writeback_congestion_wait(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        usec_timeout = event["usec_timeout"]
        usec_delayed = event["usec_delayed"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { usec_timeout = %s, usec_delayed = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, usec_timeout, usec_delayed))

    def handle_writeback_sb_inodes_requeue(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        ino = event["ino"]
        state = event["state"]
        dirtied_when = event["dirtied_when"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, ino = %s, state = %s, dirtied_when = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, ino, state, dirtied_when))

    def handle_writeback_balance_dirty_pages(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        bdi = event["bdi"]
        limit = event["limit"]
        setpoint = event["setpoint"]
        dirty = event["dirty"]
        bdi_setpoint = event["bdi_setpoint"]
        bdi_dirty = event["bdi_dirty"]
        dirty_ratelimit = event["dirty_ratelimit"]
        task_ratelimit = event["task_ratelimit"]
        dirtied = event["dirtied"]
        dirtied_pause = event["dirtied_pause"]
        paused = event["paused"]
        pause = event["pause"]
        period = event["period"]
        think = event["think"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { bdi = %s, limit = %s, setpoint = %s, dirty = %s, bdi_setpoint = %s, bdi_dirty = %s, dirty_ratelimit = %s, task_ratelimit = %s, dirtied = %s, dirtied_pause = %s, paused = %s, pause = %s, period = %s, think = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, bdi, limit, setpoint, dirty, bdi_setpoint, bdi_dirty, dirty_ratelimit, task_ratelimit, dirtied, dirtied_pause, paused, pause, period, think))

    def handle_writeback_bdi_dirty_ratelimit(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        bdi = event["bdi"]
        write_bw = event["write_bw"]
        avg_write_bw = event["avg_write_bw"]
        dirty_rate = event["dirty_rate"]
        dirty_ratelimit = event["dirty_ratelimit"]
        task_ratelimit = event["task_ratelimit"]
        balanced_dirty_ratelimit = event["balanced_dirty_ratelimit"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { bdi = %s, write_bw = %s, avg_write_bw = %s, dirty_rate = %s, dirty_ratelimit = %s, task_ratelimit = %s, balanced_dirty_ratelimit = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, bdi, write_bw, avg_write_bw, dirty_rate, dirty_ratelimit, task_ratelimit, balanced_dirty_ratelimit))

    def handle_writeback_global_dirty_state(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        nr_dirty = event["nr_dirty"]
        nr_writeback = event["nr_writeback"]
        nr_unstable = event["nr_unstable"]
        background_thresh = event["background_thresh"]
        dirty_thresh = event["dirty_thresh"]
        dirty_limit = event["dirty_limit"]
        nr_dirtied = event["nr_dirtied"]
        nr_written = event["nr_written"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { nr_dirty = %s, nr_writeback = %s, nr_unstable = %s, background_thresh = %s, dirty_thresh = %s, dirty_limit = %s, nr_dirtied = %s, nr_written = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, nr_dirty, nr_writeback, nr_unstable, background_thresh, dirty_thresh, dirty_limit, nr_dirtied, nr_written))
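
    # writeback_balance_dirty_pages and writeback_global_dirty_state expose
    # the kernel's dirty-page throttling state; watching "dirty" against
    # "limit", and the "pause"/"paused" values, shows when and for how long
    # writers are being throttled.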

    def handle_writeback_queue_io(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        moved = event["moved"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, moved = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, moved))

    def handle_writeback_wbc_writepage(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        nr_to_write = event["nr_to_write"]
        pages_skipped = event["pages_skipped"]
        sync_mode = event["sync_mode"]
        for_kupdate = event["for_kupdate"]
        for_background = event["for_background"]
        for_reclaim = event["for_reclaim"]
        range_cyclic = event["range_cyclic"]
        range_start = event["range_start"]
        range_end = event["range_end"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, nr_to_write = %s, pages_skipped = %s, sync_mode = %s, for_kupdate = %s, for_background = %s, for_reclaim = %s, range_cyclic = %s, range_start = %s, range_end = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, nr_to_write, pages_skipped, sync_mode, for_kupdate, for_background, for_reclaim, range_cyclic, range_start, range_end))

    def handle_writeback_thread_stop(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name))

    def handle_writeback_thread_start(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name))

    def handle_writeback_bdi_unregister(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name))

    def handle_writeback_bdi_register(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name))

    def handle_writeback_wake_forker_thread(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name))

    def handle_writeback_wake_thread(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name))

    def handle_writeback_wake_background(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name))

    def handle_writeback_nowork(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        name = event["name"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name))

    def handle_writeback_pages_written(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        pages = event["pages"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { pages = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pages))
}" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, pages,)) def handle_writeback_wait(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_writeback_written(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_writeback_start(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_writeback_exec(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_writeback_queue(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_writeback_nothread(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name,)) def handle_writeback_write_inode(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] ino = event["ino"] sync_mode = event["sync_mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, ino = %s, sync_mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, ino, sync_mode,)) def handle_writeback_write_inode_start(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] ino = event["ino"] sync_mode = event["sync_mode"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, ino = %s, sync_mode = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, ino, sync_mode,)) def handle_writeback_dirty_inode(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] ino = event["ino"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, ino = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, ino, flags,)) def handle_writeback_dirty_inode_start(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] ino = event["ino"] flags = event["flags"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, ino = %s, flags = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, ino, flags,)) def handle_writeback_dirty_page(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] ino = event["ino"] index = event["index"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, ino = %s, index = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, name, ino, index,)) def handle_net_latency(self, event): timestamp = event.timestamp cpu_id = event["cpu_id"] name = event["name"] delay = event["delay"] flag = event["flag"] out_id = event["out_id"] self.print_filter(event, "[%s] %s: { cpu_id = %s }, { name = %s, delay = %s, flag = %s, out_id = %s }" % (self.ns_to_hour_nsec(timestamp), 

    def handle_block_latency(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        major = event["major"]
        minor = event["minor"]
        sector = event["sector"]
        delay = event["delay"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { major = %s, minor = %s, sector = %s, delay = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, major, minor, sector, delay))

    def handle_offcpu_latency(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        comm = event["comm"]
        pid = event["pid"]
        delay = event["delay"]
        flag = event["flag"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, pid = %s, delay = %s, flag = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, comm, pid, delay, flag))

    def handle_wakeup_latency(self, event):
        timestamp = event.timestamp
        cpu_id = event["cpu_id"]
        comm = event["comm"]
        pid = event["pid"]
        delay = event["delay"]
        flag = event["flag"]
        self.print_filter(event, "[%s] %s: { cpu_id = %s }, { comm = %s, pid = %s, delay = %s, flag = %s }" % (self.ns_to_hour_nsec(timestamp), event.name, cpu_id, comm, pid, delay, flag))

    # end of generated code


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description='Track a process throughout an LTTng trace')
    parser.add_argument('path', metavar="<path/to/trace>", help='Trace path')
    parser.add_argument('--procname', '-n', type=str, default=None,
                        help='Filter the results only for this list of '
                        'process names')
    parser.add_argument('--tid', '-t', type=str, default=None,
                        help='Filter the results only for this list '
                        'of TIDs')
    parser.add_argument('--follow-child', '-f', action="store_true",
                        help='Follow children on fork')
    args = parser.parse_args()

    arg_proc_list = None
    if args.procname:
        arg_proc_list = args.procname.split(",")

    arg_tid_list = None
    if args.tid:
        arg_tid_list = [int(i) for i in args.tid.split(",")]

    traces = TraceCollection()
    handle = traces.add_traces_recursive(args.path, "ctf")
    if handle is None:
        sys.exit(1)

    t = TraceParser(traces, arg_proc_list, arg_tid_list, args.follow_child)
    t.parse()

    for h in handle.values():
        traces.remove_trace(h)
lttnganalyses-0.6.1/lttng-irqstats0000775000175000017500000000235112553274232021012 0ustar mjeansonmjeanson00000000000000#!/usr/bin/env python3
#
# The MIT License (MIT)
#
# Copyright (C) 2015 - Julien Desfossez
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.

from lttnganalyses.cli import irq

if __name__ == '__main__':
    irq.runstats()
lttnganalyses-0.6.1/test-requirements.txt0000664000175000017500000000005612746731246022334 0ustar mjeansonmjeanson00000000000000pytest
pytest-cov
flake8>=2.5.0
coverage>=4.1
lttnganalyses-0.6.1/MANIFEST.in0000664000175000017500000000117313033736352017623 0ustar mjeansonmjeanson00000000000000include versioneer.py
recursive-include tests *
include ChangeLog
include LICENSE
include mit-license.txt
include requirements.txt
include test-requirements.txt
include tox.ini
include lttng-cputop
include lttng-iolatencyfreq
include lttng-iolatencystats
include lttng-iolatencytop
include lttng-iolog
include lttng-iousagetop
include lttng-irqfreq
include lttng-irqlog
include lttng-irqstats
include lttng-memtop
include lttng-periodfreq
include lttng-periodlog
include lttng-periodstats
include lttng-periodtop
include lttng-schedfreq
include lttng-schedlog
include lttng-schedstats
include lttng-schedtop
include lttng-syscallstats