pax_global_header00006660000000000000000000000064132166401700014512gustar00rootroot0000000000000052 comment=258ac3f39a14d52d73868d24770b5b6f5786eca2 kpatch-0.5.0/000077500000000000000000000000001321664017000127665ustar00rootroot00000000000000kpatch-0.5.0/.gitignore000066400000000000000000000003571321664017000147630ustar00rootroot00000000000000*.o *.o.cmd *.o.d *.ko *.ko.cmd *.mod.c *.swp *.d *.so .tmp_versions Module.symvers kpatch-build/lookup kpatch-build/create-diff-object kpatch-build/create-klp-module kpatch-build/create-kpatch-module man/kpatch.1.gz man/kpatch-build.1.gz kpatch-0.5.0/COPYING000066400000000000000000000432541321664017000140310ustar00rootroot00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. 
GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. 
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. 
Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. 
For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. 
You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. kpatch-0.5.0/Makefile000066400000000000000000000013161321664017000144270ustar00rootroot00000000000000include Makefile.inc SUBDIRS = kpatch-build kpatch kmod man contrib BUILD_DIRS = $(SUBDIRS:%=build-%) INSTALL_DIRS = $(SUBDIRS:%=install-%) UNINSTALL_DIRS = $(SUBDIRS:%=uninstall-%) CLEAN_DIRS = $(SUBDIRS:%=clean-%) .PHONY: all install uninstall clean check .PHONY: $(SUBDIRS) $(BUILD_DIRS) $(INSTALL_DIRS) $(CLEAN_DIRS) all: $(BUILD_DIRS) $(BUILD_DIRS): $(MAKE) -C $(@:build-%=%) install: $(INSTALL_DIRS) $(INSTALL_DIRS): $(MAKE) -C $(@:install-%=%) install uninstall: $(UNINSTALL_DIRS) $(UNINSTALL_DIRS): $(MAKE) -C $(@:uninstall-%=%) uninstall clean: $(CLEAN_DIRS) $(CLEAN_DIRS): $(MAKE) -C $(@:clean-%=%) clean check: shellcheck kpatch/kpatch kpatch-build/kpatch-build kpatch-build/kpatch-gcc kpatch-0.5.0/Makefile.inc000066400000000000000000000011361321664017000151770ustar00rootroot00000000000000SHELL = /bin/sh CC = gcc INSTALL = /usr/bin/install ARCH = $(shell uname -m) PREFIX ?= /usr/local LIBDIR ?= lib LIBEXEC ?= libexec BINDIR = $(DESTDIR)$(PREFIX)/bin SBINDIR = $(DESTDIR)$(PREFIX)/sbin MODULESDIR = $(DESTDIR)$(PREFIX)/$(LIBDIR)/kpatch LIBEXECDIR = $(DESTDIR)$(PREFIX)/$(LIBEXEC)/kpatch DATADIR = $(DESTDIR)$(PREFIX)/share/kpatch MANDIR = $(DESTDIR)$(PREFIX)/share/man/man1 SYSTEMDDIR = $(DESTDIR)$(PREFIX)/lib/systemd/system # The core module is only supported on x86_64 ifeq ($(ARCH),x86_64) BUILDMOD ?= yes endif .PHONY: all install clean .DEFAULT: all kpatch-0.5.0/README.md000066400000000000000000000562671321664017000142650ustar00rootroot00000000000000kpatch: dynamic kernel patching =============================== kpatch is a Linux dynamic kernel patching infrastructure which allows you to patch a running kernel without rebooting or restarting any processes. It enables sysadmins to apply critical security patches to the kernel immediately, without having to wait for long-running tasks to complete, for users to log off, or for scheduled reboot windows. It gives more control over uptime without sacrificing security or stability. **WARNING: Use with caution! 
Kernel crashes, spontaneous reboots, and data loss may occur!** Here's a video of kpatch in action: [![kpatch video](https://img.youtube.com/vi/juyQ5TsJRTA/0.jpg)](https://www.youtube.com/watch?v=juyQ5TsJRTA) And a few more: - https://www.youtube.com/watch?v=rN0sFjrJQfU - https://www.youtube.com/watch?v=Mftc80KyjA4 Installation ------------ ### Prerequisites #### Fedora *NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.* Install the dependencies for compiling kpatch: ```bash UNAME=$(uname -r) sudo dnf install gcc kernel-devel-${UNAME%.*} elfutils elfutils-devel ``` Install the dependencies for the "kpatch-build" command: ```bash sudo dnf install rpmdevtools pesign yum-utils openssl wget numactl-devel sudo dnf builddep kernel-${UNAME%.*} sudo dnf debuginfo-install kernel-${UNAME%.*} # optional, but highly recommended sudo dnf install ccache ccache --max-size=5G # optional, for kpatch-test sudo dnf install patchutils ``` #### RHEL 7 *NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.* Install the dependencies for compiling kpatch: ```bash UNAME=$(uname -r) sudo yum install gcc kernel-devel-${UNAME%.*} elfutils elfutils-devel ``` Install the dependencies for the "kpatch-build" command: ```bash sudo yum-config-manager --enable rhel-7-server-optional-rpms sudo yum install rpmdevtools pesign yum-utils zlib-devel \ binutils-devel newt-devel python-devel perl-ExtUtils-Embed \ audit-libs-devel numactl-devel pciutils-devel bison ncurses-devel sudo yum-builddep kernel-${UNAME%.*} sudo debuginfo-install kernel-${UNAME%.*} # optional, but highly recommended sudo yum install https://dl.fedoraproject.org/pub/epel/7/x86_64/c/ccache-3.2.7-3.el7.x86_64.rpm ccache --max-size=5G # optional, for kpatch-test sudo dnf install patchutils ``` #### CentOS 7 *NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.* Install the dependencies for compiling kpatch: ```bash UNAME=$(uname -r) sudo yum install gcc kernel-devel-${UNAME%.*} elfutils elfutils-devel ``` Install the dependencies for the "kpatch-build" command: ```bash sudo yum install rpmdevtools pesign yum-utils zlib-devel \ binutils-devel newt-devel python-devel perl-ExtUtils-Embed \ audit-libs audit-libs-devel numactl-devel pciutils-devel bison # enable CentOS 7 debug repo sudo yum-config-manager --enable debug sudo yum-builddep kernel-${UNAME%.*} sudo debuginfo-install kernel-${UNAME%.*} # optional, but highly recommended - enable EPEL 7 sudo yum install ccache ccache --max-size=5G # optional, for kpatch-test sudo dnf install patchutils ``` #### Oracle Linux 7 *NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.* Install the dependencies for compiling kpatch: ```bash UNAME=$(uname -r) sudo yum install gcc kernel-devel-${UNAME%.*} elfutils elfutils-devel ``` Install the dependencies for the "kpatch-build" command: ```bash sudo yum install rpmdevtools pesign yum-utils zlib-devel \ binutils-devel newt-devel python-devel perl-ExtUtils-Embed \ audit-libs numactl-devel pciutils-devel bison # enable ol7_optional_latest repo sudo yum-config-manager --enable ol7_optional_latest sudo yum-builddep kernel-${UNAME%.*} # manually install kernel debuginfo packages rpm -ivh https://oss.oracle.com/ol7/debuginfo/kernel-debuginfo-$(uname -r).rpm rpm -ivh https://oss.oracle.com/ol7/debuginfo/kernel-debuginfo-common-x86_64-$(uname -r).rpm # optional, but highly 
recommended - enable EPEL 7 sudo yum install ccache ccache --max-size=5G # optional, for kpatch-test sudo dnf install patchutils ``` #### Ubuntu 14.04 *NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.* Install the dependencies for compiling kpatch: ```bash apt-get install make gcc libelf-dev ``` Install the dependencies for the "kpatch-build" command: ```bash apt-get install dpkg-dev devscripts apt-get build-dep linux # optional, but highly recommended apt-get install ccache ccache --max-size=5G ``` Install kernel debug symbols: ```bash # Add ddebs repository codename=$(lsb_release -sc) sudo tee /etc/apt/sources.list.d/ddebs.list << EOF deb http://ddebs.ubuntu.com/ ${codename} main restricted universe multiverse deb http://ddebs.ubuntu.com/ ${codename}-security main restricted universe multiverse deb http://ddebs.ubuntu.com/ ${codename}-updates main restricted universe multiverse deb http://ddebs.ubuntu.com/ ${codename}-proposed main restricted universe multiverse EOF # add APT key wget -Nq http://ddebs.ubuntu.com/dbgsym-release-key.asc -O- | sudo apt-key add - apt-get update && apt-get install linux-image-$(uname -r)-dbgsym ``` If there are no packages published yet to the codename-security pocket, the apt update may report a "404 Not Found" error, as well as a complaint about disabling the repository by default. This message may be ignored (see issue #710). #### Debian 9 (Stretch) Since Stretch the stock kernel can be used without changes, however the version of kpatch in Stretch is too old so you still need to build it manually. Follow the instructions for Debian Jessie (next section) but skip building a custom kernel/rebooting. #### Debian 8 (Jessie) *NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.* Install the dependencies for compiling kpatch: apt-get install make gcc libelf-dev build-essential Install and prepare the kernel sources: ```bash apt-get install linux-source-$(uname -r) cd /usr/src && tar xvf linux-source-$(uname -r).tar.xz && ln -s linux-source-$(uname -r) linux && cd linux cp /boot/config-$(uname -r) .config for OPTION in CONFIG_KALLSYMS_ALL CONFIG_FUNCTION_TRACER ; do sed -i "s/# $OPTION is not set/$OPTION=y/g" .config ; done sed -i "s/^SUBLEVEL.*/SUBLEVEL =/" Makefile make -j`getconf _NPROCESSORS_CONF` deb-pkg KDEB_PKGVERSION=$(uname -r).9-1 ``` Install the kernel packages and reboot dpkg -i /usr/src/*.deb reboot Install the dependencies for the "kpatch-build" command: apt-get install dpkg-dev apt-get build-dep linux # optional, but highly recommended apt-get install ccache ccache --max-size=5G #### Debian 7 (Lenny) *NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.* Add backports repositories: ```bash echo "deb http://http.debian.net/debian wheezy-backports main" > /etc/apt/sources.list.d/wheezy-backports.list echo "deb http://packages.incloudus.com backports-incloudus main" > /etc/apt/sources.list.d/incloudus.list wget http://packages.incloudus.com/incloudus/incloudus.pub -O- | apt-key add - aptitude update ``` Install the linux kernel, symbols and gcc 4.9: aptitude install -t wheezy-backports -y initramfs-tools aptitude install -y gcc gcc-4.9 g++-4.9 linux-image-3.14 linux-image-3.14-dbg Configure gcc 4.9 as the default gcc compiler: update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.7 20 update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 50 update-alternatives --install 
/usr/bin/g++ g++ /usr/bin/g++-4.7 20 update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.9 50 Install kpatch and these dependencies: aptitude install kpatch Configure ccache (installed by kpatch package): ccache --max-size=5G #### Gentoo *NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.* Install Kpatch and Kpatch dependencies: ```bash emerge --ask sys-kernel/kpatch ``` Install ccache (optional): ```bash emerge --ask dev-util/ccache ``` Configure ccache: ```bash ccache --max-size=5G ``` ### Build Compile kpatch: make ### Install OPTIONAL: Install kpatch to `/usr/local`: sudo make install Alternatively, the kpatch and kpatch-build scripts can be run directly from the git tree. Quick start ----------- > NOTE: While kpatch is designed to work with any recent Linux kernel on any distribution, the `kpatch-build` command has **ONLY** been tested and confirmed to work on Fedora 20 and later, RHEL 7, Oracle Linux 7, CentOS 7 and Ubuntu 14.04. First, make a source code patch against the kernel tree using diff, git, or quilt. As a contrived example, let's patch /proc/meminfo to show VmallocChunk in ALL CAPS so we can see it better: $ cat meminfo-string.patch Index: src/fs/proc/meminfo.c =================================================================== --- src.orig/fs/proc/meminfo.c +++ src/fs/proc/meminfo.c @@ -95,7 +95,7 @@ static int meminfo_proc_show(struct seq_ "Committed_AS: %8lu kB\n" "VmallocTotal: %8lu kB\n" "VmallocUsed: %8lu kB\n" - "VmallocChunk: %8lu kB\n" + "VMALLOCCHUNK: %8lu kB\n" #ifdef CONFIG_MEMORY_FAILURE "HardwareCorrupted: %5lu kB\n" #endif Build the patch module: $ kpatch-build -t vmlinux meminfo-string.patch Using cache at /home/jpoimboe/.kpatch/3.13.10-200.fc20.x86_64/src Testing patch file checking file fs/proc/meminfo.c Building original kernel Building patched kernel Detecting changed objects Rebuilding changed objects Extracting new and modified ELF sections meminfo.o: changed function: meminfo_proc_show Building patch module: kpatch-meminfo-string.ko SUCCESS > NOTE: The `-t vmlinux` option is used to tell `kpatch-build` to only look for > changes in the `vmlinux` base kernel image, which is much faster than also > compiling all the kernel modules. If your patch affects a kernel module, you > can either omit this option to build everything, and have `kpatch-build` > detect which modules changed, or you can specify the affected kernel build > targets with multiple `-t` options. That outputs a patch module named `kpatch-meminfo-string.ko` in the current directory. Now apply it to the running kernel: $ sudo kpatch load kpatch-meminfo-string.ko loading core module: /usr/local/lib/modules/3.13.10-200.fc20.x86_64/kpatch/kpatch.ko loading patch module: kpatch-meminfo-string.ko Done! The kernel is now patched. $ grep -i chunk /proc/meminfo VMALLOCCHUNK: 34359337092 kB Patch Author Guide ------------------ Unfortunately, live patching isn't always as easy as the previous example, and can have some major pitfalls if you're not careful. To learn more about how to properly create live patches, see the [Patch Author Guide](doc/patch-author-guide.md). How it works ------------ kpatch works at a function granularity: old functions are replaced with new ones. It has four main components: - **kpatch-build**: a collection of tools which convert a source diff patch to a patch module. 
They work by compiling the kernel both with and without the source patch, comparing the binaries, and generating a patch module which includes new binary versions of the functions to be replaced. - **patch module**: a kernel module (.ko file) which includes the replacement functions and metadata about the original functions. - **kpatch core module**: a kernel module (.ko file) which provides an interface for the patch modules to register new functions for replacement. It uses the kernel ftrace subsystem to hook into the original function's mcount call instruction, so that a call to the original function is redirected to the replacement function. - **kpatch utility:** a command-line tool which allows a user to manage a collection of patch modules. One or more patch modules may be configured to load at boot time, so that a system can remain patched even after a reboot into the same version of the kernel. ### kpatch-build The "kpatch-build" command converts a source-level diff patch file to a kernel patch module. Most of its work is performed by the kpatch-build script which uses a utility named `create-diff-object` to compare changed objects. The primary steps in kpatch-build are: - Build the unstripped vmlinux for the kernel - Patch the source tree - Rebuild vmlinux and monitor which objects are being rebuilt. These are the "changed objects". - Recompile each changed object with `-ffunction-sections -fdata-sections`, resulting in the changed patched objects - Unpatch the source tree - Recompile each changed object with `-ffunction-sections -fdata-sections`, resulting in the changed original objects - For every changed object, use `create-diff-object` to do the following: * Analyze each original/patched object pair for patchability * Add `.kpatch.funcs` and `.rela.kpatch.funcs` sections to the output object. The kpatch core module uses this to determine the list of functions that need to be redirected using ftrace. * Add `.kpatch.dynrelas` and `.rela.kpatch.dynrelas` sections to the output object. This will be used to resolve references to non-included local and non-exported global symbols. These relocations will be resolved by the kpatch core module. * Generate the resulting output object containing the new and modified sections - Link all the output objects into a cumulative object - Generate the patch module ### Patching The patch modules register with the core module (`kpatch.ko`). They provide information about original functions that need to be replaced, and corresponding function pointers to the replacement functions. The core module registers a handler function with ftrace. The handler function is called by ftrace immediately before the original function begins executing. This occurs with the help of the reserved mcount call at the beginning of every function, created by the gcc `-mfentry` flag. The ftrace handler then modifies the return instruction pointer (IP) address on the stack and returns to ftrace, which then restores the original function's arguments and stack, and "returns" to the new function. Limitations ----------- - Patches which modify init functions (annotated with `__init`) are not supported. kpatch-build will return an error if the patch attempts to do so. - Patches which modify statically allocated data are not supported. kpatch-build will detect that and return an error. (In the future we will add a facility to support it. It will probably require the user to write code which runs at patch module loading time which manually updates the data.) 
- Patches which change the way a function interacts with dynamically
  allocated data might be safe, or might not.  It isn't possible for
  kpatch-build to verify the safety of this kind of patch.  It's up to the
  user to understand what the patch does, whether the new functions interact
  with dynamically allocated data in a different way than the old functions
  did, and whether it would be safe to atomically apply such a patch to a
  running kernel.

- Patches which modify functions in vdso are not supported.  These run in
  user-space and ftrace can't hook them.

- Patches which modify functions that are missing a `fentry` call are not
  supported.  This includes any `lib-y` targets that are archived into a
  `lib.a` library for later linking (for example, `lib/string.o`).

- Some incompatibilities currently exist between kpatch and usage of ftrace
  and kprobes.  See the Frequently Asked Questions section for more details.

Frequently Asked Questions
--------------------------

**Q. What's the relationship between kpatch and the upstream Linux live
kernel patching component (livepatch)?**

Starting with Linux 4.0, the Linux kernel has livepatch, which is a new
converged live kernel patching framework.  Livepatch is similar in
functionality to the kpatch core module, though it doesn't yet have all the
features that kpatch does.

kpatch-build already works with both livepatch and kpatch.  If your kernel
has CONFIG\_LIVEPATCH enabled, it detects that and builds a patch module in
the livepatch format.  Otherwise it builds a kpatch patch module.

The kpatch script also supports both patch module formats.

**Q. Isn't this just a virus/rootkit injection framework?**

kpatch uses kernel modules to replace code.  It requires the
`CAP_SYS_MODULE` capability.  If you already have that capability, then you
already have the ability to arbitrarily modify the kernel, with or without
kpatch.

**Q. How can I detect if somebody has patched the kernel?**

When a patch module is loaded, the `TAINT_USER` or `TAINT_LIVEPATCH` flag is
set.  (The latter flag was introduced in Linux version 4.0.)

To test for these flags, `cat /proc/sys/kernel/tainted` and check to see if
the value of `TAINT_USER` (64) or `TAINT_LIVEPATCH` (32768) has been OR'ed
in.

Note that the `TAINT_OOT_MODULE` flag (4096) will also be set, since the
patch module is built outside the Linux kernel source tree.

If your patch module is unsigned, the `TAINT_FORCED_MODULE` flag (2) will
also be set.  Starting with Linux 3.15, this will be changed to the more
specific `TAINT_UNSIGNED_MODULE` (8192).

Linux versions starting with 4.9 also support a per-module `TAINT_LIVEPATCH`
taint flag.  This can be checked by verifying the output of
`cat /sys/module/<patch module>/taint` -- a 'K' character indicates the
presence of `TAINT_LIVEPATCH`.

**Q. Will it destabilize my system?**

No, as long as the patch is chosen carefully.  See the Limitations section
above.

**Q. Why does kpatch use ftrace to jump to the replacement function instead
of adding the jump directly?**

ftrace owns the first "call mcount" instruction of every kernel function.
In order to keep compatibility with ftrace, we go through ftrace rather than
updating the instruction directly.  This approach also ensures that the code
modification path is reliable, since ftrace has been doing it successfully
for years.

**Q. Is kpatch compatible with \<insert kernel debugging subsystem here\>?**

We aim to be good kernel citizens and maintain compatibility.  A kpatch
replacement function is no different than a function loaded by any other
kernel module.
Each replacement function has its own symbol name and kallsyms entry, so it
looks like a normal function to the kernel.

- **oops stack traces**: Yes.  If the replacement function is involved in an
  oops, the stack trace will show the function and kernel module name of the
  replacement function, just like any other kernel module function.  The
  oops message will also show the taint flag (see the FAQ "How can I detect
  if somebody has patched the kernel" for specifics).

- **kdump/crash**: Yes.  Replacement functions are normal functions, so
  crash will have no issues.

- **ftrace**: Yes, but certain uses of ftrace which involve opening the
  `/sys/kernel/debug/tracing/trace` file or using `trace-cmd record` can
  result in a tiny window of time where a patch gets temporarily disabled.
  Therefore it's a good idea to avoid using ftrace on a patched system until
  this issue is resolved.

- **systemtap/kprobes**: Some incompatibilities exist.
  - If you set up a kprobe module at the beginning of a function before
    loading a kpatch module, and they both affect the same function, kprobes
    "wins" until the kprobe has been unregistered.  This is tracked in issue
    [#47](https://github.com/dynup/kpatch/issues/47).
  - Setting a kretprobe before loading a kpatch module could be unsafe.  See
    issue [#67](https://github.com/dynup/kpatch/issues/67).

- **perf**: Yes.

- **tracepoints**: Patches to a function which uses tracepoints will result
  in the tracepoints being effectively disabled as long as the patch is
  applied.

**Q. Why not use something like kexec instead?**

If you want to avoid a hardware reboot, but are ok with restarting
processes, kexec is a good alternative.

**Q. If an application can't handle a reboot, it's designed wrong.**

That's a good poi... [system reboots]

**Q. What changes are needed in other upstream projects?**

We hope to make the following changes to other projects:

- kernel:
  - ftrace improvements to close any windows that would allow a patch to be
    inadvertently disabled

**Q. Is it possible to register a function that gets called atomically with
`stop_machine` when the patch module loads and unloads?**

We do have plans to implement something like that.

**Q. What kernels are supported?**

kpatch needs gcc >= 4.8 and Linux >= 3.9.

**Q. Is it possible to remove a patch?**

Yes.  Just run `kpatch unload` which will disable and unload the patch
module and restore the function to its original state.

**Q. Can you apply multiple patches?**

Yes, but to prevent any unexpected interactions between multiple patch
modules, it's recommended that patch upgrades are cumulative, so that each
patch is a superset of the previous patch.  This can be achieved by
combining the new patch with the previous patch using `combinediff` before
running `kpatch-build`.

**Q. Why did kpatch-build detect a changed function that wasn't touched by
the source patch?**

There could be a variety of reasons for this, such as:

- The patch changed an inline function.
- The compiler decided to inline a changed function, resulting in the outer
  function getting recompiled.  This is common in the case where the inner
  function is static and is only called once.

**Q. How do I patch a function which is always on the stack of at least one
task, such as schedule(), sys_poll(), sys_select(), sys_read(),
sys_nanosleep(), etc?**

- If you're sure it would be safe for the old function and the new function
  to run simultaneously, use the `KPATCH_FORCE_UNSAFE` macro to skip the
  activeness safety check for the function.  See `kmod/patch/kpatch-macros.h`
  for more details.
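  As a purely illustrative sketch (the function name `foo_poll_example` is
  hypothetical, and the macro's exact interface should be confirmed in
  `kmod/patch/kpatch-macros.h` for your kpatch version), marking such a
  function might look roughly like this in the patched source file:

```c
/*
 * Hedged sketch, not from a real patch: foo_poll_example stands in for a
 * patched function that is always on some task's stack.  This assumes
 * KPATCH_FORCE_UNSAFE() takes the function symbol as its only argument;
 * verify the macro's interface in kmod/patch/kpatch-macros.h before use.
 * Only skip the activeness safety check if you have verified that the old
 * and new versions of the function can safely run simultaneously.
 */
#include "kpatch-macros.h"

long foo_poll_example(void);	/* the function your patch modifies */

KPATCH_FORCE_UNSAFE(foo_poll_example);
```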
**Q. Is patching of kernel modules supported?**

- Yes.

**Q. Can you patch out-of-tree modules?**

- Yes, though it's currently a bit of a manual process.  See this
  [message](https://www.redhat.com/archives/kpatch/2015-June/msg00004.html)
  on the kpatch mailing list for more information.

Get involved
------------

If you have questions or feedback, join the #kpatch IRC channel on freenode
and say hi.  We also have a [mailing
list](https://www.redhat.com/mailman/listinfo/kpatch).

Contributions are very welcome.  Feel free to open issues or PRs on github.
For big PRs, it's a good idea to discuss them first in github issues or on
the [mailing list](https://www.redhat.com/mailman/listinfo/kpatch) before
you write a lot of code.

License
-------

kpatch is under the GPLv2 license.

This program is free software; you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by the Free
Software Foundation; either version 2 of the License, or (at your option)
any later version.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 51
Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.

kpatch-0.5.0/contrib/Makefile
include ../Makefile.inc

all:

install: all
	$(INSTALL) -d $(SYSTEMDDIR)
	$(INSTALL) -m 0644 kpatch.service $(SYSTEMDDIR)
	sed -i 's~PREFIX~$(PREFIX)~' $(SYSTEMDDIR)/kpatch.service

uninstall:
	$(RM) $(SYSTEMDDIR)/kpatch.service

clean:

kpatch-0.5.0/contrib/kpatch.service
[Unit]
Description="Apply kpatch kernel patches"
ConditionKernelCommandLine=!kpatch.enable=0

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=PREFIX/sbin/kpatch load --all
ExecStop=PREFIX/sbin/kpatch unload --all

[Install]
WantedBy=multi-user.target

kpatch-0.5.0/contrib/kpatch.spec
Name: kpatch
Summary: Dynamic kernel patching
Version: 0.4.0
License: GPLv2
Group: System Environment/Kernel
URL: http://github.com/dynup/kpatch
Release: 1%{?dist}
Source0: %{name}-%{version}.tar.gz
Requires: kmod bash
BuildRequires: gcc kernel-devel elfutils elfutils-devel
BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)

# needed for the kernel specific module
%define KVER %(uname -r)

%description
kpatch is a Linux dynamic kernel patching tool which allows you to patch a
running kernel without rebooting or restarting any processes.  It enables
sysadmins to apply critical security patches to the kernel immediately,
without having to wait for long-running tasks to complete, users to log off,
or for scheduled reboot windows.  It gives more control over up-time without
sacrificing security or stability.

%package runtime
Summary: Dynamic kernel patching
Buildarch: noarch
Provides: %{name} = %{version}

%description runtime
kpatch is a Linux dynamic kernel patching tool which allows you to patch a
running kernel without rebooting or restarting any processes.
It enables sysadmins to apply critical security patches to the kernel immediately, without having to wait for long-running tasks to complete, users to log off, or for scheduled reboot windows. It gives more control over up-time without sacrificing security or stability. %package build Requires: %{name} Summary: Dynamic kernel patching %description build kpatch is a Linux dynamic kernel patching tool which allows you to patch a running kernel without rebooting or restarting any processes. It enables sysadmins to apply critical security patches to the kernel immediately, without having to wait for long-running tasks to complete, users to log off, or for scheduled reboot windows. It gives more control over up-time without sacrificing security or stability. %package %{KVER} Requires: %{name} Summary: Dynamic kernel patching %description %{KVER} kpatch is a Linux dynamic kernel patching tool which allows you to patch a running kernel without rebooting or restarting any processes. It enables sysadmins to apply critical security patches to the kernel immediately, without having to wait for long-running tasks to complete, users to log off, or for scheduled reboot windows. It gives more control over up-time without sacrificing security or stability. %prep %setup -q %build make %{_smp_mflags} %install rm -rf %{buildroot} make install PREFIX=/%{_usr} DESTDIR=%{buildroot} %clean rm -rf %{buildroot} %files runtime %defattr(-,root,root,-) %doc COPYING README.md %{_sbindir}/kpatch %{_mandir}/man1/kpatch.1* %{_usr}/lib/systemd/system/* %files %{KVER} %defattr(-,root,root,-) %{_usr}/lib/kpatch/%{KVER} %files build %defattr(-,root,root,-) %{_bindir}/* %{_libexecdir}/* %{_datadir}/%{name} %{_mandir}/man1/kpatch-build.1* %changelog * Wed Dec 3 2014 Josh Poimboeuf - 0.2.2-1 - rebased to current version * Tue Sep 2 2014 Josh Poimboeuf - 0.2.1-1 - rebased to current version * Mon Jul 28 2014 Josh Poimboeuf - 0.1.9-1 - moved core module to /usr/lib/kpatch - rebased to current version * Mon Jul 07 2014 Udo Seidel - 0.1.7-1 - rebased to current version * Sat May 24 2014 Udo Seidel - 0.1.1-1 - rebased to current version * Thu Apr 10 2014 Udo Seidel - 0.0.1-3 - added dracut module * Tue Mar 25 2014 Udo Seidel - 0.0.1-2 - added man pages * Sat Mar 22 2014 Udo Seidel - 0.0.1-1 - initial release kpatch-0.5.0/doc/000077500000000000000000000000001321664017000135335ustar00rootroot00000000000000kpatch-0.5.0/doc/patch-author-guide.md000066400000000000000000000457741321664017000175700ustar00rootroot00000000000000kpatch Patch Author Guide ========================= Because kpatch-build is relatively easy to use, it can be easy to assume that a successful patch module build means that the patch is safe to apply. But in fact that's a very dangerous assumption. There are many pitfalls that can be encountered when creating a live patch. This document attempts to guide the patch creation process. It's a work in progress. If you find it useful, please contribute! Patch Analysis -------------- kpatch provides _some_ guarantees, but it does not guarantee that all patches are safe to apply. Every patch must also be analyzed in-depth by a human. The most important point here cannot be stressed enough. Here comes the bold: **Do not blindly apply patches. There is no substitute for human analysis and reasoning on a per-patch basis. 
All patches must be thoroughly analyzed by a human kernel expert who completely understands the patch and the affected code and how they relate to the live patching environment.** kpatch vs livepatch vs kGraft ----------------------------- This document assumes that the kpatch core module is being used. Other live patching systems (e.g., livepatch and kGraft) have different consistency models. Each comes with its own guarantees, and there are some subtle differences. The guidance in this document applies **only** to kpatch. Patch upgrades -------------- Due to potential unexpected interactions between patches, it's highly recommended that when patching a system which has already been patched, the second patch should be a cumulative upgrade which is a superset of the first patch. Data structure changes ---------------------- kpatch patches functions, not data. If the original patch involves a change to a data structure, the patch will require some rework, as changes to data structures are not allowed by default. Usually you have to get creative. There are several possible ways to handle this: ### Change the code which uses the data structure Sometimes, instead of changing the data structure itself, you can change the code which uses it. For example, consider this [patch](http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=54a20552e1eae07aa240fa370a0293e006b5faed). which has the following hunk: ``` @@ -3270,6 +3277,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = { [SVM_EXIT_EXCP_BASE + PF_VECTOR] = pf_interception, [SVM_EXIT_EXCP_BASE + NM_VECTOR] = nm_interception, [SVM_EXIT_EXCP_BASE + MC_VECTOR] = mc_interception, + [SVM_EXIT_EXCP_BASE + AC_VECTOR] = ac_interception, [SVM_EXIT_INTR] = intr_interception, [SVM_EXIT_NMI] = nmi_interception, [SVM_EXIT_SMI] = nop_on_interception, ``` `svm_exit_handlers[]` is an array of function pointers. The patch adds a `ac_interception` function pointer to the array at index `[SVM_EXIT_EXCP_BASE + AC_VECTOR]`. That change is incompatible with kpatch. Looking at the source file, we can see that this function pointer is only accessed by a single function, `handle_exit()`: ``` if (exit_code >= ARRAY_SIZE(svm_exit_handlers) || !svm_exit_handlers[exit_code]) { WARN_ONCE(1, "svm: unexpected exit reason 0x%x\n", exit_code); kvm_queue_exception(vcpu, UD_VECTOR); return 1; } return svm_exit_handlers[exit_code](svm); ``` So an easy solution here is to just change the code to manually check for the new case before looking in the data structure: ``` @@ -3580,6 +3580,9 @@ static int handle_exit(struct kvm_vcpu *vcpu) return 1; } + if (exit_code == SVM_EXIT_EXCP_BASE + AC_VECTOR) + return ac_interception(svm); + return svm_exit_handlers[exit_code](svm); } ``` Not only is this an easy solution, it's also safer than touching data since kpatch creates a barrier between the calling of old functions and new functions. ### Use a kpatch load hook If you need to change the contents of an existing variable in-place, you can use the `KPATCH_LOAD_HOOK` macro to specify a function to be called when the patch module is loaded. `kpatch-macros.h` provides `KPATCH_LOAD_HOOK` and `KPATCH_UNLOAD_HOOK` macros to define such functions. The signature of both hook functions is `void foo(void)`. Their execution context is as follows: * For patches to vmlinux or already loaded kernel modules, hook functions will be run by `stop_machine` as part of applying or removing a patch. (Therefore the hooks must not block or sleep.) 
* For patches to kernel modules which haven't been loaded yet, a module-notifier will execute load hooks when the associated module is loaded into the `MODULE_STATE_COMING` state. The load hook is called before any module_init code. Example: a kpatch fix for CVE-2016-5389 utilized the `KPATCH_LOAD_HOOK` and `KPATCH_UNLOAD_HOOK` macros to modify variable `sysctl_tcp_challenge_ack_limit` in-place: ``` +static bool kpatch_write = false; +void kpatch_load_tcp_send_challenge_ack(void) +{ + if (sysctl_tcp_challenge_ack_limit == 100) { + sysctl_tcp_challenge_ack_limit = 1000; + kpatch_write = true; + } +} +void kpatch_unload_tcp_send_challenge_ack(void) +{ + if (kpatch_write && sysctl_tcp_challenge_ack_limit == 1000) + sysctl_tcp_challenge_ack_limit = 100; +} +#include "kpatch-macros.h" +KPATCH_LOAD_HOOK(kpatch_load_tcp_send_challenge_ack); +KPATCH_UNLOAD_HOOK(kpatch_unload_tcp_send_challenge_ack); ``` Don't forget to protect access to the data as needed. Also be careful when upgrading. If patch A has a load hook which writes to X, and then you load patch B which is a superset of A, in some cases you may want to prevent patch B from writing to X, if A is already loaded. ### Use a shadow variable If you need to add a field to an existing data structure, or even many existing data structures, you can use the `kpatch_shadow_*()` functions: * `kpatch_shadow_alloc` - allocates a new shadow variable associated with a given object * `kpatch_shadow_get` - find and return a pointer to a shadow variable * `kpatch_shadow_free` - find and free a shadow variable Example: The `shadow-newpid.patch` integration test demonstrates the usage of these functions. A shadow PID variable is allocated in `do_fork()`: it is associated with the current `struct task_struct *p` value, given a string lookup key of "newpid", sized accordingly, and allocated as per `GFP_KERNEL` flag rules. `kpatch_shadow_alloc` returns a pointer to the shadow variable, so we can dereference and make assignments as usual. In this patch chunk, the shadow `newpid` is allocated then assigned to a rolling `ctr` counter value: ``` + int *newpid; + static int ctr = 0; + + newpid = kpatch_shadow_alloc(p, "newpid", sizeof(*newpid), + GFP_KERNEL); + if (newpid) + *newpid = ctr++; ``` A shadow variable may also be accessed via `kpatch_shadow_get`. Here the patch modifies `task_context_switch_counts()` to fetch the shadow variable associated with the current `struct task_struct *p` object and a "newpid" tag. As in the previous patch chunk, the shadow variable pointer may be accessed as an ordinary pointer type: ``` + int *newpid; + seq_put_decimal_ull(m, "voluntary_ctxt_switches:\t", p->nvcsw); seq_put_decimal_ull(m, "\nnonvoluntary_ctxt_switches:\t", p->nivcsw); seq_putc(m, '\n'); + + newpid = kpatch_shadow_get(p, "newpid"); + if (newpid) + seq_printf(m, "newpid:\t%d\n", *newpid); ``` A shadow variable is freed by calling `kpatch_shadow_free` and providing the object / string key combination. Once freed, the shadow variable is not safe to access: ``` exit_task_work(tsk); exit_thread(tsk); + kpatch_shadow_free(tsk, "newpid"); + /* * Flush inherited counters to the parent - before the parent * gets woken up by child-exit notifications. ``` Notes: * `kpatch_shadow_alloc` initializes only shadow variable metadata. It allocates variable storage via `kmalloc` with the `gfp_t` flags it is given, but otherwise leaves the area untouched. Initialization of a shadow variable is the responsibility of the caller. 
* As soon as `kpatch_shadow_alloc` creates a shadow variable, its presence will be reported by `kpatch_shadow_get`. Care should be taken to avoid any potential race conditions between a kernel thread that allocates a shadow variable and concurrent threads that may attempt to use it. Data semantic changes --------------------- Part of the stable-tree [backport](https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/commit/fs/aio.c?h=linux-3.10.y&id=6745cb91b5ec93a1b34221279863926fba43d0d7) to fix CVE-2014-0206 changed the reference count semantic of `struct kioctx.reqs_active`. Associating a shadow variable to new instances of this structure can be used by patched code to handle both new (post-patch) and existing (pre-patch) instances. (This example is trimmed to highlight this use-case. Boilerplate code is also required to allocate/free a shadow variable called "reqs_active_v2" whenever a new `struct kioctx` is created/released. No values are ever assigned to the shadow variable.) Shadow variable existence can be verified before applying the new data semantic of the associated object: ``` @@ -678,6 +688,9 @@ void aio_complete(struct kiocb *iocb, lo put_rq: /* everything turned out well, dispose of the aiocb. */ aio_put_req(iocb); + reqs_active_v2 = kpatch_shadow_get(ctx, "reqs_active_v2"); + if (reqs_active_v2) + atomic_dec(&ctx->reqs_active); /* * We have to order our ring_info tail store above and test ``` Likewise, shadow variable non-existence can be tested to continue applying the old data semantic: ``` @@ -705,6 +718,7 @@ static long aio_read_events_ring(struct unsigned head, pos; long ret = 0; int copy_ret; + int *reqs_active_v2; mutex_lock(&ctx->ring_lock); @@ -756,7 +770,9 @@ static long aio_read_events_ring(struct pr_debug("%li h%u t%u\n", ret, head, ctx->tail); - atomic_sub(ret, &ctx->reqs_active); + reqs_active_v2 = kpatch_shadow_get(ctx, "reqs_active_v2"); + if (!reqs_active_v2) + atomic_sub(ret, &ctx->reqs_active); out: mutex_unlock(&ctx->ring_lock); ``` The previous example can be extended to use shadow variable storage to handle locking semantic changes. Consider the [upstream fix](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=1d147bfa64293b2723c4fec50922168658e613ba) for CVE-2014-2706, which added a `ps_lock` to `struct sta_info` to protect critical sections throughout `net/mac80211/sta_info.c`. 
When allocating a new `struct sta_info`, allocate a corresponding "ps_lock" shadow variable large enough to hold a `spinlock_t` instance, then initialize the spinlock: ``` @@ -333,12 +336,16 @@ struct sta_info *sta_info_alloc(struct ieee80211_sub_if_data *sdata, struct sta_info *sta; struct timespec uptime; int i; + spinlock_t *ps_lock; sta = kzalloc(sizeof(*sta) + local->hw.sta_data_size, gfp); if (!sta) return NULL; spin_lock_init(&sta->lock); + ps_lock = kpatch_shadow_alloc(sta, "ps_lock", sizeof(*ps_lock), gfp); + if (ps_lock) + spin_lock_init(ps_lock); INIT_WORK(&sta->drv_unblock_wk, sta_unblock); INIT_WORK(&sta->ampdu_mlme.work, ieee80211_ba_session_work); mutex_init(&sta->ampdu_mlme.mtx); ``` Patched code can reference the "ps_lock" shadow variable associated with a given `struct sta_info` to determine and apply the correct locking semantic for that instance: ``` @@ -471,6 +475,23 @@ ieee80211_tx_h_unicast_ps_buf(struct ieee80211_tx_data *tx) sta->sta.addr, sta->sta.aid, ac); if (tx->local->total_ps_buffered >= TOTAL_MAX_TX_BUFFER) purge_old_ps_buffers(tx->local); + + /* sync with ieee80211_sta_ps_deliver_wakeup */ + ps_lock = kpatch_shadow_get(sta, "ps_lock"); + if (ps_lock) { + spin_lock(ps_lock); + /* + * STA woke up the meantime and all the frames on ps_tx_buf have + * been queued to pending queue. No reordering can happen, go + * ahead and Tx the packet. + */ + if (!test_sta_flag(sta, WLAN_STA_PS_STA) && + !test_sta_flag(sta, WLAN_STA_PS_DRIVER)) { + spin_unlock(ps_lock); + return TX_CONTINUE; + } + } + if (skb_queue_len(&sta->ps_tx_buf[ac]) >= STA_MAX_TX_BUFFER) { struct sk_buff *old = skb_dequeue(&sta->ps_tx_buf[ac]); ps_dbg(tx->sdata, ``` Init code changes ----------------- Any code which runs in an `__init` function or during module or device initialization is problematic, as it may have already run before the patch was applied. The patch may require a load hook function which detects whether such init code has run, and which rewrites or changes the original initialization to force it into the desired state. Some changes involving hardware init are inherently incompatible with live patching. Header file changes ------------------- When changing header files, be extra careful. If data is being changed, you probably need to modify the patch. See "Data struct changes" above. If a function prototype is being changed, make sure it's not an exported function. Otherwise it could break out-of-tree modules. One way to work around this is to define an entirely new copy of the function (with updated code) and patch in-tree callers to invoke it rather than the deprecated version. Many header file changes result in a complete rebuild of the kernel tree, which makes kpatch-build compare every .o file in the kernel. This slows the build down a lot, and the build can even fail if kpatch-build has any lurking bugs. If it's a trivial header file change, like adding a macro, it's advisable to just move that macro into the .c file where it's needed to avoid changing the header file at all. Dealing with unexpected changed functions ----------------------------------------- In general, it's best to patch as minimally as possible. If kpatch-build reports unexpected function changes, it's always a good idea to figure out why it thinks they changed. In many cases you can change the source patch so that they no longer change. Some examples: * If a changed function was inlined, then the callers which inlined the function will also change.
In this case there's nothing you can do to prevent the extra changes. * If a changed function was originally inlined, but turned into a callable function after patching, consider adding `__always_inline` to the function definition. Likewise, if a function is only inlined after patching, consider using `noinline` to prevent the compiler from doing so. * If your patch adds a call to a function where the original version of the function's ELF symbol has a .constprop or .isra suffix, and the corresponding patched function doesn't, that means the patch caused gcc to no longer perform an interprocedural optimization, which affects the function and all its callers. If you want to prevent this from happening, copy/paste the function with a new name and call the new function from your patch. * Moving around source code lines can introduce unique instructions if any `__LINE__` preprocessor macros are in use. This can be mitigated by adding any new functions to the bottom of source files, using newline whitespace to maintain original line counts, etc. A more exact fix can be employed by modifying the source code that invokes `__LINE__` and hard-coding the original line number in place. Removing references to static local variables --------------------------------------------- Removing references to static locals will fail to patch unless extra steps are taken. Static locals are basically global variables because they outlive the function's scope. They need to be correlated so that the new function will use the old static local. That way patching the function doesn't inadvertently reset the variable to zero; instead the variable keeps its old value. To work around this limitation one needs to retain the reference to the static local. This might be as simple as adding the variable back in the patched function in a non-functional way and ensuring the compiler doesn't optimize it away. Code removal ------------ Some fixes may replace or completely remove functions and references to them. Remember that kpatch modules can only add new functions and redirect existing functions, so "removed" functions will continue to exist in kernel address space as effectively dead code. 
That means this patch (source code removal of `cmdline_proc_show`): ``` diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2016-11-30 19:39:49.317737234 +0000 +++ src/fs/proc/cmdline.c 2016-11-30 19:39:52.696737234 +0000 @@ -3,15 +3,15 @@ #include #include -static int cmdline_proc_show(struct seq_file *m, void *v) -{ - seq_printf(m, "%s\n", saved_command_line); - return 0; -} +static int cmdline_proc_show_v2(struct seq_file *m, void *v) +{ + seq_printf(m, "%s kpatch\n", saved_command_line); + return 0; +} static int cmdline_proc_open(struct inode *inode, struct file *file) { - return single_open(file, cmdline_proc_show, NULL); + return single_open(file, cmdline_proc_show_v2, NULL); } static const struct file_operations cmdline_proc_fops = { ``` will generate an equivalent kpatch module to this patch (dead `cmdline_proc_show` left in source): ``` diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2016-11-30 19:39:49.317737234 +0000 +++ src/fs/proc/cmdline.c 2016-11-30 19:39:52.696737234 +0000 @@ -9,9 +9,15 @@ static int cmdline_proc_show(struct seq_ return 0; } +static int cmdline_proc_show_v2(struct seq_file *m, void *v) +{ + seq_printf(m, "%s kpatch\n", saved_command_line); + return 0; +} + static int cmdline_proc_open(struct inode *inode, struct file *file) { - return single_open(file, cmdline_proc_show, NULL); + return single_open(file, cmdline_proc_show_v2, NULL); } static const struct file_operations cmdline_proc_fops = { ``` In both versions, `kpatch-build` will determine that only `cmdline_proc_open` has changed and that `cmdline_proc_show_v2` is a new function. In some patching cases it might be necessary to completely remove the original function to avoid the compiler complaining about a defined but unused function. This will depend on symbol scope and kernel build options. Other issues ------------ When adding a call to `printk_once()`, `pr_warn_once()`, or any other "once" variation of `printk()`, you'll get the following error: ``` ERROR: vmx.o: 1 unsupported section change(s) vmx.o: WARNING: unable to correlate static local variable __print_once.60588 used by vmx_update_pi_irte, assuming variable is new vmx.o: changed function: vmx_update_pi_irte vmx.o: data section .data..read_mostly selected for inclusion /usr/lib/kpatch/create-diff-object: unreconcilable difference ``` This error occurs because `printk_once()` adds a static local variable to the `.data..read_mostly` section. kpatch-build strictly disallows any changes to that section, because in some cases a change to this section indicates a bug. To work around this issue, you'll need to manually implement your own "once" logic which doesn't store the static variable in the `.data..read_mostly` section. For example, a `pr_warn_once()` can be replaced with: ``` static bool print_once; ... if (!print_once) { print_once = true; pr_warn("..."); } ``` kpatch-0.5.0/examples/000077500000000000000000000000001321664017000146045ustar00rootroot00000000000000kpatch-0.5.0/examples/tcp_cubic-better-follow-cubic-curve-converted.patch000066400000000000000000000071761321664017000265500ustar00rootroot00000000000000The original patch changes the initialization of the 'cubictcp' instance of struct tcp_congestion_ops (the 'cubictcp.cwnd_event' field). Kpatch intentionally refuses to process such changes.
This modification of the patch uses Kpatch load/unload hooks to set 'cubictcp.cwnd_event' when the binary patch is loaded and reset it to NULL when the patch is unloaded. It still needs to be checked whether changing that field could be problematic due to concurrency issues, etc. The 'cwnd_event' callback is used only via the tcp_ca_event() function. include/net/tcp.h: static inline void tcp_ca_event(struct sock *sk, const enum tcp_ca_event event) { const struct inet_connection_sock *icsk = inet_csk(sk); if (icsk->icsk_ca_ops->cwnd_event) icsk->icsk_ca_ops->cwnd_event(sk, event); } In turn, tcp_ca_event() is called in a number of places in net/ipv4/tcp_output.c and net/ipv4/tcp_input.c. One problem with this modification of the patch is that it may not be safe to unload it. If it is possible for tcp_ca_event() to run concurrently with the unloading of the patch, it may happen that 'icsk->icsk_ca_ops->cwnd_event' is the address of bictcp_cwnd_event() when tcp_ca_event() checks it but is set to NULL right after. So 'icsk->icsk_ca_ops->cwnd_event(sk, event)' would result in a kernel oops. Whether such a scenario is possible should be analyzed. If it is, then at least the body of tcp_ca_event() should somehow be made atomic w.r.t. the patch changing 'cwnd_event'. Perhaps RCU could be suitable for that: a read-side critical section around the body of tcp_ca_event() with a single read of the icsk->icsk_ca_ops->cwnd_event pointer via rcu_dereference(). The pointer could then be set by the patch with rcu_assign_pointer(). An alternative suggested by Josh Poimboeuf would be to patch the functions that call the 'cwnd_event' callback (tcp_ca_event() in this case) so that they call bictcp_cwnd_event() directly when they detect the cubictcp struct [1]. Note that tcp_ca_event() is inlined in a number of places, so the binary patch will provide replacements for all of the corresponding functions rather than for just one. It still needs to be checked whether replacing these functions at runtime is safe. References: [1] https://www.redhat.com/archives/kpatch/2015-September/msg00005.html diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c index 894b7ce..9bff8a0 100644 --- a/net/ipv4/tcp_cubic.c +++ b/net/ipv4/tcp_cubic.c @@ -153,6 +153,27 @@ static void bictcp_init(struct sock *sk) tcp_sk(sk)->snd_ssthresh = initial_ssthresh; } +static void bictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event) +{ + if (event == CA_EVENT_TX_START) { + struct bictcp *ca = inet_csk_ca(sk); + u32 now = tcp_time_stamp; + s32 delta; + + delta = now - tcp_sk(sk)->lsndtime; + + /* We were application limited (idle) for a while. + * Shift epoch_start to keep cwnd growth to cubic curve. + */ + if (ca->epoch_start && delta > 0) { + ca->epoch_start += delta; + if (after(ca->epoch_start, now)) + ca->epoch_start = now; + } + return; + } +} + /* calculate the cubic root of x using a table lookup followed by one * Newton-Raphson iteration.
* Avg err ~= 0.195% @@ -444,6 +465,20 @@ static struct tcp_congestion_ops cubictcp __read_mostly = { .name = "cubic", }; +void kpatch_load_cubictcp_cwnd_event(void) +{ + cubictcp.cwnd_event = bictcp_cwnd_event; +} + +void kpatch_unload_cubictcp_cwnd_event(void) +{ + cubictcp.cwnd_event = NULL; +} + +#include "kpatch-macros.h" +KPATCH_LOAD_HOOK(kpatch_load_cubictcp_cwnd_event); +KPATCH_UNLOAD_HOOK(kpatch_unload_cubictcp_cwnd_event); + static int __init cubictcp_register(void) { BUILD_BUG_ON(sizeof(struct bictcp) > ICSK_CA_PRIV_SIZE); kpatch-0.5.0/examples/tcp_cubic-better-follow-cubic-curve-original.patch000066400000000000000000000033011321664017000263470ustar00rootroot00000000000000This patch is for 3.10.x. It combines the following commits from the mainline: commit 30927520dbae297182990bb21d08762bcc35ce1d Author: Eric Dumazet Date: Wed Sep 9 21:55:07 2015 -0700 tcp_cubic: better follow cubic curve after idle period commit c2e7204d180f8efc80f27959ca9cf16fa17f67db Author: Eric Dumazet Date: Thu Sep 17 08:38:00 2015 -0700 tcp_cubic: do not set epoch_start in the future References: http://www.phoronix.com/scan.php?page=news_item&px=Google-Fixes-TCP-Linux diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c index 894b7ce..872b3a0 100644 --- a/net/ipv4/tcp_cubic.c +++ b/net/ipv4/tcp_cubic.c @@ -153,6 +153,27 @@ static void bictcp_init(struct sock *sk) tcp_sk(sk)->snd_ssthresh = initial_ssthresh; } +static void bictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event) +{ + if (event == CA_EVENT_TX_START) { + struct bictcp *ca = inet_csk_ca(sk); + u32 now = tcp_time_stamp; + s32 delta; + + delta = now - tcp_sk(sk)->lsndtime; + + /* We were application limited (idle) for a while. + * Shift epoch_start to keep cwnd growth to cubic curve. + */ + if (ca->epoch_start && delta > 0) { + ca->epoch_start += delta; + if (after(ca->epoch_start, now)) + ca->epoch_start = now; + } + return; + } +} + /* calculate the cubic root of x using a table lookup followed by one * Newton-Raphson iteration. 
* Avg err ~= 0.195% @@ -439,6 +460,7 @@ static struct tcp_congestion_ops cubictcp __read_mostly = { .cong_avoid = bictcp_cong_avoid, .set_state = bictcp_state, .undo_cwnd = bictcp_undo_cwnd, + .cwnd_event = bictcp_cwnd_event, .pkts_acked = bictcp_acked, .owner = THIS_MODULE, .name = "cubic", kpatch-0.5.0/kmod/000077500000000000000000000000001321664017000137205ustar00rootroot00000000000000kpatch-0.5.0/kmod/Makefile000066400000000000000000000012031321664017000153540ustar00rootroot00000000000000include ../Makefile.inc KPATCH_BUILD ?= /lib/modules/$(shell uname -r)/build KERNELRELEASE := $(lastword $(subst /, , $(dir $(KPATCH_BUILD)))) all: clean ifeq ($(BUILDMOD),yes) $(MAKE) -C core endif install: ifeq ($(BUILDMOD),yes) $(INSTALL) -d $(MODULESDIR)/$(KERNELRELEASE) $(INSTALL) -m 644 core/kpatch.ko $(MODULESDIR)/$(KERNELRELEASE) $(INSTALL) -m 644 core/Module.symvers $(MODULESDIR)/$(KERNELRELEASE) endif $(INSTALL) -d $(DATADIR)/patch $(INSTALL) -m 644 patch/* $(DATADIR)/patch uninstall: ifeq ($(BUILDMOD),yes) $(RM) -R $(MODULESDIR) endif $(RM) -R $(DATADIR) clean: ifeq ($(BUILDMOD),yes) $(MAKE) -C core clean endif kpatch-0.5.0/kmod/core/000077500000000000000000000000001321664017000146505ustar00rootroot00000000000000kpatch-0.5.0/kmod/core/Makefile000066400000000000000000000011711321664017000163100ustar00rootroot00000000000000# make rules KPATCH_BUILD ?= /lib/modules/$(shell uname -r)/build KERNELRELEASE := $(lastword $(subst /, , $(dir $(KPATCH_BUILD)))) THISDIR := $(abspath $(dir $(lastword $(MAKEFILE_LIST)))) ifeq ($(wildcard $(KPATCH_BUILD)),) $(error $(KPATCH_BUILD) doesn\'t exist. Try installing the kernel-devel-$(KERNELRELEASE) RPM or linux-headers-$(KERNELRELEASE) DEB.) endif KPATCH_MAKE = $(MAKE) -C $(KPATCH_BUILD) M=$(THISDIR) kpatch.ko: core.c $(KPATCH_MAKE) kpatch.ko all: kpatch.ko clean: $(RM) -Rf .*.o.cmd .*.ko.cmd .tmp_versions *.o *.ko *.mod.c \ Module.symvers # kbuild rules obj-m := kpatch.o kpatch-y := core.o shadow.o kpatch-0.5.0/kmod/core/core.c000066400000000000000000000706571321664017000157630ustar00rootroot00000000000000/* * Copyright (C) 2014 Seth Jennings * Copyright (C) 2013-2014 Josh Poimboeuf * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, see . */ /* * kpatch core module * * Patch modules register with this module to redirect old functions to new * functions. * * For each function patched by the module we must: * - Call stop_machine * - Ensure that no task has the old function in its call stack * - Add the new function address to kpatch_func_hash * * After that, each call to the old function calls into kpatch_ftrace_handler() * which finds the new function in kpatch_func_hash table and updates the * return instruction pointer so that ftrace will return to the new function. 
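 * The handler redirects execution by setting regs->ip to the new function's
 * address plus MCOUNT_INSN_SIZE, so that execution resumes just past the new
 * function's own ftrace call site.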
*/ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include "kpatch.h" #ifndef UTS_UBUNTU_RELEASE_ABI #define UTS_UBUNTU_RELEASE_ABI 0 #endif #if !defined(CONFIG_FUNCTION_TRACER) || \ !defined(CONFIG_HAVE_FENTRY) || \ !defined(CONFIG_MODULES) || \ !defined(CONFIG_SYSFS) || \ !defined(CONFIG_KALLSYMS_ALL) #error "CONFIG_FUNCTION_TRACER, CONFIG_HAVE_FENTRY, CONFIG_MODULES, CONFIG_SYSFS, CONFIG_KALLSYMS_ALL kernel config options are required" #endif #define KPATCH_HASH_BITS 8 static DEFINE_HASHTABLE(kpatch_func_hash, KPATCH_HASH_BITS); static DEFINE_SEMAPHORE(kpatch_mutex); static LIST_HEAD(kpmod_list); static int kpatch_num_patched; struct kobject *kpatch_root_kobj; EXPORT_SYMBOL_GPL(kpatch_root_kobj); struct kpatch_kallsyms_args { const char *objname; const char *name; unsigned long addr; unsigned long count; unsigned long pos; }; /* this is a double loop, use goto instead of break */ #define do_for_each_linked_func(kpmod, func) { \ struct kpatch_object *_object; \ list_for_each_entry(_object, &kpmod->objects, list) { \ if (!kpatch_object_linked(_object)) \ continue; \ list_for_each_entry(func, &_object->funcs, list) { #define while_for_each_linked_func() \ } \ } \ } /* * The kpatch core module has a state machine which allows for proper * synchronization with kpatch_ftrace_handler() when it runs in NMI context. * * +-----------------------------------------------------+ * | | * | + * v +---> KPATCH_STATE_SUCCESS * KPATCH_STATE_IDLE +---> KPATCH_STATE_UPDATING | * ^ +---> KPATCH_STATE_FAILURE * | + * | | * +-----------------------------------------------------+ * * KPATCH_STATE_IDLE: No updates are pending. The func hash is valid, and the * reader doesn't need to check func->op. * * KPATCH_STATE_UPDATING: An update is in progress. The reader must call * kpatch_state_finish(KPATCH_STATE_FAILURE) before accessing the func hash. * * KPATCH_STATE_FAILURE: An update failed, and the func hash might be * inconsistent (pending patched funcs might not have been removed yet). If * func->op is KPATCH_OP_PATCH, then rollback to the previous version of the * func. * * KPATCH_STATE_SUCCESS: An update succeeded, but the func hash might be * inconsistent (pending unpatched funcs might not have been removed yet). If * func->op is KPATCH_OP_UNPATCH, then rollback to the previous version of the * func. 
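 * Note that it is kpatch_ftrace_handler() itself, running in NMI context,
 * that performs the UPDATING -> FAILURE transition; the updater detects this
 * via kpatch_state_finish() and rolls the operation back.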
*/ enum { KPATCH_STATE_IDLE, KPATCH_STATE_UPDATING, KPATCH_STATE_SUCCESS, KPATCH_STATE_FAILURE, }; static atomic_t kpatch_state; static int (*kpatch_set_memory_rw)(unsigned long addr, int numpages); static int (*kpatch_set_memory_ro)(unsigned long addr, int numpages); #define MAX_STACK_TRACE_DEPTH 64 static unsigned long stack_entries[MAX_STACK_TRACE_DEPTH]; struct stack_trace trace = { .max_entries = ARRAY_SIZE(stack_entries), .entries = &stack_entries[0], }; static inline void kpatch_state_idle(void) { int state = atomic_read(&kpatch_state); WARN_ON(state != KPATCH_STATE_SUCCESS && state != KPATCH_STATE_FAILURE); atomic_set(&kpatch_state, KPATCH_STATE_IDLE); } static inline void kpatch_state_updating(void) { WARN_ON(atomic_read(&kpatch_state) != KPATCH_STATE_IDLE); atomic_set(&kpatch_state, KPATCH_STATE_UPDATING); } /* If state is updating, change it to success or failure and return new state */ static inline int kpatch_state_finish(int state) { int result; WARN_ON(state != KPATCH_STATE_SUCCESS && state != KPATCH_STATE_FAILURE); result = atomic_cmpxchg(&kpatch_state, KPATCH_STATE_UPDATING, state); return result == KPATCH_STATE_UPDATING ? state : result; } static struct kpatch_func *kpatch_get_func(unsigned long ip) { struct kpatch_func *f; /* Here, we have to use rcu safe hlist because of NMI concurrency */ hash_for_each_possible_rcu(kpatch_func_hash, f, node, ip) if (f->old_addr == ip) return f; return NULL; } static struct kpatch_func *kpatch_get_prev_func(struct kpatch_func *f, unsigned long ip) { hlist_for_each_entry_continue_rcu(f, node) if (f->old_addr == ip) return f; return NULL; } static inline bool kpatch_object_linked(struct kpatch_object *object) { return object->mod || !strcmp(object->name, "vmlinux"); } static inline int kpatch_compare_addresses(unsigned long stack_addr, unsigned long func_addr, unsigned long func_size, const char *func_name) { if (stack_addr >= func_addr && stack_addr < func_addr + func_size) { pr_err("activeness safety check failed for %s\n", func_name); return -EBUSY; } return 0; } static int kpatch_backtrace_address_verify(struct kpatch_module *kpmod, unsigned long address) { struct kpatch_func *func; int i; int ret; /* check kpmod funcs */ do_for_each_linked_func(kpmod, func) { unsigned long func_addr, func_size; const char *func_name; struct kpatch_func *active_func; if (func->force) continue; active_func = kpatch_get_func(func->old_addr); if (!active_func) { /* patching an unpatched func */ func_addr = func->old_addr; func_size = func->old_size; func_name = func->name; } else { /* repatching or unpatching */ func_addr = active_func->new_addr; func_size = active_func->new_size; func_name = active_func->name; } ret = kpatch_compare_addresses(address, func_addr, func_size, func_name); if (ret) return ret; } while_for_each_linked_func(); /* in the replace case, need to check the func hash as well */ hash_for_each_rcu(kpatch_func_hash, i, func, node) { if (func->op == KPATCH_OP_UNPATCH && !func->force) { ret = kpatch_compare_addresses(address, func->new_addr, func->new_size, func->name); if (ret) return ret; } } return ret; } /* * Verify activeness safety, i.e. that none of the to-be-patched functions are * on the stack of any task. * * This function is called from stop_machine() context. */ static int kpatch_verify_activeness_safety(struct kpatch_module *kpmod) { struct task_struct *g, *t; int i; int ret = 0; /* Check the stacks of all tasks. 
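 * If any entry in a task's stack trace falls within an old (or, when
 * unpatching, new) version of a patched function, the operation is aborted
 * with -EBUSY. Functions marked with KPATCH_FORCE_UNSAFE() are skipped here.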
*/ do_each_thread(g, t) { trace.nr_entries = 0; save_stack_trace_tsk(t, &trace); if (trace.nr_entries >= trace.max_entries) { ret = -EBUSY; pr_err("more than %u trace entries!\n", trace.max_entries); goto out; } for (i = 0; i < trace.nr_entries; i++) { if (trace.entries[i] == ULONG_MAX) break; ret = kpatch_backtrace_address_verify(kpmod, trace.entries[i]); if (ret) goto out; } } while_each_thread(g, t); out: if (ret) { pr_err("PID: %d Comm: %.20s\n", t->pid, t->comm); for (i = 0; i < trace.nr_entries; i++) { if (trace.entries[i] == ULONG_MAX) break; pr_err(" [<%pK>] %pB\n", (void *)trace.entries[i], (void *)trace.entries[i]); } } return ret; } /* Called from stop_machine */ static int kpatch_apply_patch(void *data) { struct kpatch_module *kpmod = data; struct kpatch_func *func; struct kpatch_hook *hook; struct kpatch_object *object; int ret; ret = kpatch_verify_activeness_safety(kpmod); if (ret) { kpatch_state_finish(KPATCH_STATE_FAILURE); return ret; } /* tentatively add the new funcs to the global func hash */ do_for_each_linked_func(kpmod, func) { hash_add_rcu(kpatch_func_hash, &func->node, func->old_addr); } while_for_each_linked_func(); /* memory barrier between func hash add and state change */ smp_wmb(); /* * Check if any inconsistent NMI has happened while updating. If not, * move to success state. */ ret = kpatch_state_finish(KPATCH_STATE_SUCCESS); if (ret == KPATCH_STATE_FAILURE) { pr_err("NMI activeness safety check failed\n"); /* Failed, we have to rollback patching process */ do_for_each_linked_func(kpmod, func) { hash_del_rcu(&func->node); } while_for_each_linked_func(); return -EBUSY; } /* run any user-defined load hooks */ list_for_each_entry(object, &kpmod->objects, list) { if (!kpatch_object_linked(object)) continue; list_for_each_entry(hook, &object->hooks_load, list) (*hook->hook)(); } return 0; } /* Called from stop_machine */ static int kpatch_remove_patch(void *data) { struct kpatch_module *kpmod = data; struct kpatch_func *func; struct kpatch_hook *hook; struct kpatch_object *object; int ret; ret = kpatch_verify_activeness_safety(kpmod); if (ret) { kpatch_state_finish(KPATCH_STATE_FAILURE); return ret; } /* Check if any inconsistent NMI has happened while updating */ ret = kpatch_state_finish(KPATCH_STATE_SUCCESS); if (ret == KPATCH_STATE_FAILURE) return -EBUSY; /* Succeeded, remove all updating funcs from hash table */ do_for_each_linked_func(kpmod, func) { hash_del_rcu(&func->node); } while_for_each_linked_func(); /* run any user-defined unload hooks */ list_for_each_entry(object, &kpmod->objects, list) { if (!kpatch_object_linked(object)) continue; list_for_each_entry(hook, &object->hooks_unload, list) (*hook->hook)(); } return 0; } /* * This is where the magic happens. Update regs->ip to tell ftrace to return * to the new function. * * If there are multiple patch modules that have registered to patch the same * function, the last one to register wins, as it'll be first in the hash * bucket. */ static void notrace kpatch_ftrace_handler(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *fops, struct pt_regs *regs) { struct kpatch_func *func; int state; preempt_disable_notrace(); if (likely(!in_nmi())) func = kpatch_get_func(ip); else { /* Checking for NMI inconsistency */ state = kpatch_state_finish(KPATCH_STATE_FAILURE); /* no memory reordering between state and func hash read */ smp_rmb(); func = kpatch_get_func(ip); if (likely(state == KPATCH_STATE_IDLE)) goto done; if (state == KPATCH_STATE_SUCCESS) { /* * Patching succeeded. 
If the function was being * unpatched, roll back to the previous version. */ if (func && func->op == KPATCH_OP_UNPATCH) func = kpatch_get_prev_func(func, ip); } else { /* * Patching failed. If the function was being patched, * roll back to the previous version. */ if (func && func->op == KPATCH_OP_PATCH) func = kpatch_get_prev_func(func, ip); } } done: if (func) regs->ip = func->new_addr + MCOUNT_INSN_SIZE; preempt_enable_notrace(); } #if LINUX_VERSION_CODE < KERNEL_VERSION(3, 19, 0) #define FTRACE_OPS_FL_IPMODIFY 0 #endif static struct ftrace_ops kpatch_ftrace_ops __read_mostly = { .func = kpatch_ftrace_handler, .flags = FTRACE_OPS_FL_SAVE_REGS | FTRACE_OPS_FL_IPMODIFY, }; static int kpatch_ftrace_add_func(unsigned long ip) { int ret; /* check if any other patch modules have also patched this func */ if (kpatch_get_func(ip)) return 0; ret = ftrace_set_filter_ip(&kpatch_ftrace_ops, ip, 0, 0); if (ret) { pr_err("can't set ftrace filter at address 0x%lx\n", ip); return ret; } if (!kpatch_num_patched) { ret = register_ftrace_function(&kpatch_ftrace_ops); if (ret) { pr_err("can't register ftrace handler\n"); ftrace_set_filter_ip(&kpatch_ftrace_ops, ip, 1, 0); return ret; } } kpatch_num_patched++; return 0; } static int kpatch_ftrace_remove_func(unsigned long ip) { int ret; /* check if any other patch modules have also patched this func */ if (kpatch_get_func(ip)) return 0; if (kpatch_num_patched == 1) { ret = unregister_ftrace_function(&kpatch_ftrace_ops); if (ret) { pr_err("can't unregister ftrace handler\n"); return ret; } } kpatch_num_patched--; ret = ftrace_set_filter_ip(&kpatch_ftrace_ops, ip, 1, 0); if (ret) { pr_err("can't remove ftrace filter at address 0x%lx\n", ip); return ret; } return 0; } static int kpatch_kallsyms_callback(void *data, const char *name, struct module *mod, unsigned long addr) { struct kpatch_kallsyms_args *args = data; bool vmlinux = !strcmp(args->objname, "vmlinux"); if ((mod && vmlinux) || (!mod && !vmlinux)) return 0; if (strcmp(args->name, name)) return 0; if (!vmlinux && strcmp(args->objname, mod->name)) return 0; args->addr = addr; args->count++; /* * Finish the search when the symbol is found for the desired position * or the position is not defined for a non-unique symbol. */ if ((args->pos && (args->count == args->pos)) || (!args->pos && (args->count > 1))) { return 1; } return 0; } static int kpatch_find_object_symbol(const char *objname, const char *name, unsigned long sympos, unsigned long *addr) { struct kpatch_kallsyms_args args = { .objname = objname, .name = name, .addr = 0, .count = 0, .pos = sympos, }; mutex_lock(&module_mutex); kallsyms_on_each_symbol(kpatch_kallsyms_callback, &args); mutex_unlock(&module_mutex); /* * Ensure an address was found. If sympos is 0, ensure symbol is unique; * otherwise ensure the symbol position count matches sympos. */ if (args.addr == 0) pr_err("symbol '%s' not found in symbol table\n", name); else if (args.count > 1 && sympos == 0) { pr_err("unresolvable ambiguity for symbol '%s' in object '%s'\n", name, objname); } else if (sympos != args.count && sympos > 0) { pr_err("symbol position %lu for symbol '%s' in object '%s' not found\n", sympos, name, objname); } else { *addr = args.addr; return 0; } *addr = 0; return -EINVAL; } /* * External symbols are located outside the parent object (where the parent * object is either vmlinux or the kmod being patched). 
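 * They are resolved by first searching the kernel's exported symbol table
 * and, failing that, by looking the symbol up in another object (.o) linked
 * into the patch module itself.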
*/ static int kpatch_find_external_symbol(const char *objname, const char *name, unsigned long sympos, unsigned long *addr) { const struct kernel_symbol *sym; /* first, check if it's an exported symbol */ preempt_disable(); sym = find_symbol(name, NULL, NULL, true, true); preempt_enable(); if (sym) { *addr = sym->value; return 0; } /* otherwise check if it's in another .o within the patch module */ return kpatch_find_object_symbol(objname, name, sympos, addr); } static int kpatch_write_relocations(struct kpatch_module *kpmod, struct kpatch_object *object) { int ret, size, readonly = 0, numpages; struct kpatch_dynrela *dynrela; u64 loc, val; #if (( LINUX_VERSION_CODE >= KERNEL_VERSION(4, 5, 0) ) || \ ( LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0) && \ UTS_UBUNTU_RELEASE_ABI >= 7 ) \ ) unsigned long core = (unsigned long)kpmod->mod->core_layout.base; unsigned long core_size = kpmod->mod->core_layout.size; #else unsigned long core = (unsigned long)kpmod->mod->module_core; unsigned long core_size = kpmod->mod->core_size; #endif list_for_each_entry(dynrela, &object->dynrelas, list) { if (dynrela->external) ret = kpatch_find_external_symbol(kpmod->mod->name, dynrela->name, dynrela->sympos, &dynrela->src); else ret = kpatch_find_object_symbol(object->name, dynrela->name, dynrela->sympos, &dynrela->src); if (ret) { pr_err("unable to find symbol '%s'\n", dynrela->name); return ret; } switch (dynrela->type) { case R_X86_64_NONE: continue; case R_X86_64_PC32: loc = dynrela->dest; val = (u32)(dynrela->src + dynrela->addend - dynrela->dest); size = 4; break; case R_X86_64_32S: loc = dynrela->dest; val = (s32)dynrela->src + dynrela->addend; size = 4; break; case R_X86_64_64: loc = dynrela->dest; val = dynrela->src; size = 8; break; default: pr_err("unsupported rela type %ld for source %s (0x%lx <- 0x%lx)\n", dynrela->type, dynrela->name, dynrela->dest, dynrela->src); return -EINVAL; } if (loc < core || loc >= core + core_size) { pr_err("bad dynrela location 0x%llx for symbol %s\n", loc, dynrela->name); return -EINVAL; } /* * Skip it if the instruction to be relocated has been * changed already (paravirt or alternatives may do this). */ if (memchr_inv((void *)loc, 0, size)) { pr_notice("Skipped dynrela for %s (0x%lx <- 0x%lx): the instruction has been changed already.\n", dynrela->name, dynrela->dest, dynrela->src); pr_notice_once( "This is not necessarily a bug but it may indicate in some cases " "that the binary patch does not handle paravirt operations, alternatives or the like properly.\n"); continue; } #if defined(CONFIG_DEBUG_SET_MODULE_RONX) || defined(CONFIG_ARCH_HAS_SET_MEMORY) #if (( LINUX_VERSION_CODE >= KERNEL_VERSION(4, 5, 0) ) || \ ( LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0) && \ UTS_UBUNTU_RELEASE_ABI >= 7 ) \ ) readonly = (loc < core + kpmod->mod->core_layout.ro_size); #else readonly = (loc < core + kpmod->mod->core_ro_size); #endif #endif numpages = (PAGE_SIZE - (loc & ~PAGE_MASK) >= size) ? 
1 : 2; if (readonly) kpatch_set_memory_rw(loc & PAGE_MASK, numpages); ret = probe_kernel_write((void *)loc, &val, size); if (readonly) kpatch_set_memory_ro(loc & PAGE_MASK, numpages); if (ret) { pr_err("write to 0x%llx failed for symbol %s\n", loc, dynrela->name); return ret; } } return 0; } static int kpatch_unlink_object(struct kpatch_object *object) { struct kpatch_func *func; int ret; list_for_each_entry(func, &object->funcs, list) { if (!func->old_addr) continue; ret = kpatch_ftrace_remove_func(func->old_addr); if (ret) { WARN(1, "can't unregister ftrace for address 0x%lx\n", func->old_addr); return ret; } } if (object->mod) module_put(object->mod); return 0; } /* * Link to a to-be-patched object in preparation for patching it. * * - Find the object module * - Write patch module relocations which reference the object * - Calculate the patched functions' addresses * - Register them with ftrace */ static int kpatch_link_object(struct kpatch_module *kpmod, struct kpatch_object *object) { struct module *mod = NULL; struct kpatch_func *func, *func_err = NULL; int ret; bool vmlinux = !strcmp(object->name, "vmlinux"); if (!vmlinux) { mutex_lock(&module_mutex); mod = find_module(object->name); if (!mod) { /* * The module hasn't been loaded yet. We can patch it * later in kpatch_module_notify(). */ mutex_unlock(&module_mutex); return 0; } /* should never fail because we have the mutex */ WARN_ON(!try_module_get(mod)); mutex_unlock(&module_mutex); object->mod = mod; } ret = kpatch_write_relocations(kpmod, object); if (ret) goto err_put; list_for_each_entry(func, &object->funcs, list) { /* lookup the old location */ ret = kpatch_find_object_symbol(object->name, func->name, func->sympos, &func->old_addr); if (ret) { func_err = func; goto err_ftrace; } /* add to ftrace filter and register handler if needed */ ret = kpatch_ftrace_add_func(func->old_addr); if (ret) { func_err = func; goto err_ftrace; } } return 0; err_ftrace: list_for_each_entry(func, &object->funcs, list) { if (func == func_err) break; WARN_ON(kpatch_ftrace_remove_func(func->old_addr)); } err_put: if (!vmlinux) module_put(mod); return ret; } static int kpatch_module_notify(struct notifier_block *nb, unsigned long action, void *data) { struct module *mod = data; struct kpatch_module *kpmod; struct kpatch_object *object; struct kpatch_func *func; struct kpatch_hook *hook; int ret = 0; bool found = false; if (action != MODULE_STATE_COMING) return 0; down(&kpatch_mutex); list_for_each_entry(kpmod, &kpmod_list, list) { list_for_each_entry(object, &kpmod->objects, list) { if (kpatch_object_linked(object)) continue; if (!strcmp(object->name, mod->name)) { found = true; goto done; } } } done: if (!found) goto out; ret = kpatch_link_object(kpmod, object); if (ret) goto out; BUG_ON(!object->mod); pr_notice("patching newly loaded module '%s'\n", object->name); /* run any user-defined load hooks */ list_for_each_entry(hook, &object->hooks_load, list) (*hook->hook)(); /* add to the global func hash */ list_for_each_entry(func, &object->funcs, list) hash_add_rcu(kpatch_func_hash, &func->node, func->old_addr); out: up(&kpatch_mutex); /* no way to stop the module load on error */ WARN(ret, "error (%d) patching newly loaded module '%s'\n", ret, object->name); return 0; } int kpatch_register(struct kpatch_module *kpmod, bool replace) { int ret, i, force = 0; struct kpatch_object *object, *object_err = NULL; struct kpatch_func *func; if (!kpmod->mod || list_empty(&kpmod->objects)) return -EINVAL; down(&kpatch_mutex); if (kpmod->enabled) { ret = 
-EINVAL; goto err_up; } list_add_tail(&kpmod->list, &kpmod_list); if (!try_module_get(kpmod->mod)) { ret = -ENODEV; goto err_list; } list_for_each_entry(object, &kpmod->objects, list) { ret = kpatch_link_object(kpmod, object); if (ret) { object_err = object; goto err_unlink; } if (!kpatch_object_linked(object)) { pr_notice("delaying patch of unloaded module '%s'\n", object->name); continue; } if (strcmp(object->name, "vmlinux")) pr_notice("patching module '%s'\n", object->name); list_for_each_entry(func, &object->funcs, list) func->op = KPATCH_OP_PATCH; } if (replace) hash_for_each_rcu(kpatch_func_hash, i, func, node) func->op = KPATCH_OP_UNPATCH; /* memory barrier between func hash and state write */ smp_wmb(); kpatch_state_updating(); /* * Idle the CPUs, verify activeness safety, and atomically make the new * functions visible to the ftrace handler. */ ret = stop_machine(kpatch_apply_patch, kpmod, NULL); /* * For the replace case, remove any obsolete funcs from the hash and * the ftrace filter, and disable the owning patch module so that it * can be removed. */ if (!ret && replace) { struct kpatch_module *kpmod2, *safe; hash_for_each_rcu(kpatch_func_hash, i, func, node) { if (func->op != KPATCH_OP_UNPATCH) continue; if (func->force) force = 1; hash_del_rcu(&func->node); WARN_ON(kpatch_ftrace_remove_func(func->old_addr)); } list_for_each_entry_safe(kpmod2, safe, &kpmod_list, list) { if (kpmod == kpmod2) continue; kpmod2->enabled = false; pr_notice("unloaded patch module '%s'\n", kpmod2->mod->name); /* * Don't allow modules with forced functions to be * removed because they might still be in use. */ if (!force) module_put(kpmod2->mod); list_del(&kpmod2->list); } } /* memory barrier between func hash and state write */ smp_wmb(); /* NMI handlers can return to normal now */ kpatch_state_idle(); /* * Wait for all existing NMI handlers to complete so that they don't * see any changes to funcs or funcs->op that might occur after this * point. * * Any NMI handlers starting after this point will see the IDLE state. 
*/ synchronize_rcu(); if (ret) goto err_ops; do_for_each_linked_func(kpmod, func) { func->op = KPATCH_OP_NONE; } while_for_each_linked_func(); /* HAS_MODULE_TAINT - upstream 2992ef29ae01 "livepatch/module: make TAINT_LIVEPATCH module-specific" */ /* HAS_MODULE_TAINT_LONG - upstream 7fd8329ba502 "taint/module: Clean up global and module taint flags handling" */ #ifdef RHEL_RELEASE_CODE # if RHEL_RELEASE_CODE >= RHEL_RELEASE_VERSION(7, 4) # define HAS_MODULE_TAINT # endif #elif LINUX_VERSION_CODE >= KERNEL_VERSION(4, 10, 0) # define HAS_MODULE_TAINT_LONG #elif LINUX_VERSION_CODE >= KERNEL_VERSION(4, 9, 0) # define HAS_MODULE_TAINT #endif #ifdef TAINT_LIVEPATCH pr_notice_once("tainting kernel with TAINT_LIVEPATCH\n"); add_taint(TAINT_LIVEPATCH, LOCKDEP_STILL_OK); # ifdef HAS_MODULE_TAINT_LONG set_bit(TAINT_LIVEPATCH, &kpmod->mod->taints); # elif defined(HAS_MODULE_TAINT) kpmod->mod->taints |= (1 << TAINT_LIVEPATCH); # endif #else pr_notice_once("tainting kernel with TAINT_USER\n"); add_taint(TAINT_USER, LOCKDEP_STILL_OK); #endif pr_notice("loaded patch module '%s'\n", kpmod->mod->name); kpmod->enabled = true; up(&kpatch_mutex); return 0; err_ops: if (replace) hash_for_each_rcu(kpatch_func_hash, i, func, node) func->op = KPATCH_OP_NONE; err_unlink: list_for_each_entry(object, &kpmod->objects, list) { if (object == object_err) break; if (!kpatch_object_linked(object)) continue; WARN_ON(kpatch_unlink_object(object)); } module_put(kpmod->mod); err_list: list_del(&kpmod->list); err_up: up(&kpatch_mutex); return ret; } EXPORT_SYMBOL(kpatch_register); int kpatch_unregister(struct kpatch_module *kpmod) { struct kpatch_object *object; struct kpatch_func *func; int ret, force = 0; down(&kpatch_mutex); if (!kpmod->enabled) { ret = -EINVAL; goto out; } do_for_each_linked_func(kpmod, func) { func->op = KPATCH_OP_UNPATCH; if (func->force) force = 1; } while_for_each_linked_func(); /* memory barrier between func hash and state write */ smp_wmb(); kpatch_state_updating(); ret = stop_machine(kpatch_remove_patch, kpmod, NULL); /* NMI handlers can return to normal now */ kpatch_state_idle(); /* * Wait for all existing NMI handlers to complete so that they don't * see any changes to funcs or funcs->op that might occur after this * point. * * Any NMI handlers starting after this point will see the IDLE state. */ synchronize_rcu(); if (ret) { do_for_each_linked_func(kpmod, func) { func->op = KPATCH_OP_NONE; } while_for_each_linked_func(); goto out; } list_for_each_entry(object, &kpmod->objects, list) { if (!kpatch_object_linked(object)) continue; ret = kpatch_unlink_object(object); if (ret) goto out; } pr_notice("unloaded patch module '%s'\n", kpmod->mod->name); kpmod->enabled = false; /* * Don't allow modules with forced functions to be removed because they * might still be in use. 
*/ if (!force) module_put(kpmod->mod); list_del(&kpmod->list); out: up(&kpatch_mutex); return ret; } EXPORT_SYMBOL(kpatch_unregister); static struct notifier_block kpatch_module_nb = { .notifier_call = kpatch_module_notify, .priority = INT_MIN, /* called last */ }; static int kpatch_init(void) { int ret; kpatch_set_memory_rw = (void *)kallsyms_lookup_name("set_memory_rw"); if (!kpatch_set_memory_rw) { pr_err("can't find set_memory_rw symbol\n"); return -ENXIO; } kpatch_set_memory_ro = (void *)kallsyms_lookup_name("set_memory_ro"); if (!kpatch_set_memory_ro) { pr_err("can't find set_memory_ro symbol\n"); return -ENXIO; } kpatch_root_kobj = kobject_create_and_add("kpatch", kernel_kobj); if (!kpatch_root_kobj) return -ENOMEM; ret = register_module_notifier(&kpatch_module_nb); if (ret) goto err_root_kobj; return 0; err_root_kobj: kobject_put(kpatch_root_kobj); return ret; } static void kpatch_exit(void) { rcu_barrier(); WARN_ON(kpatch_num_patched != 0); WARN_ON(unregister_module_notifier(&kpatch_module_nb)); kobject_put(kpatch_root_kobj); } module_init(kpatch_init); module_exit(kpatch_exit); MODULE_LICENSE("GPL"); kpatch-0.5.0/kmod/core/kpatch.h000066400000000000000000000045541321664017000163030ustar00rootroot00000000000000/* * kpatch.h * * Copyright (C) 2014 Seth Jennings * Copyright (C) 2013-2014 Josh Poimboeuf * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, see . 
* * Contains the API for the core kpatch module used by the patch modules */ #ifndef _KPATCH_H_ #define _KPATCH_H_ #include #include enum kpatch_op { KPATCH_OP_NONE, KPATCH_OP_PATCH, KPATCH_OP_UNPATCH, }; struct kpatch_func { /* public */ unsigned long new_addr; unsigned long new_size; unsigned long old_addr; unsigned long old_size; unsigned long sympos; const char *name; struct list_head list; int force; /* private */ struct hlist_node node; enum kpatch_op op; struct kobject kobj; }; struct kpatch_dynrela { unsigned long dest; unsigned long src; unsigned long type; unsigned long sympos; const char *name; int addend; int external; struct list_head list; }; struct kpatch_hook { struct list_head list; void (*hook)(void); }; struct kpatch_object { struct list_head list; const char *name; struct list_head funcs; struct list_head dynrelas; struct list_head hooks_load; struct list_head hooks_unload; /* private */ struct module *mod; struct kobject kobj; }; struct kpatch_module { /* public */ struct module *mod; struct list_head objects; /* public read-only */ bool enabled; /* private */ struct list_head list; struct kobject kobj; }; extern struct kobject *kpatch_root_kobj; extern int kpatch_register(struct kpatch_module *kpmod, bool replace); extern int kpatch_unregister(struct kpatch_module *kpmod); extern void *kpatch_shadow_alloc(void *obj, char *var, size_t size, gfp_t gfp); extern void kpatch_shadow_free(void *obj, char *var); extern void *kpatch_shadow_get(void *obj, char *var); #endif /* _KPATCH_H_ */ kpatch-0.5.0/kmod/core/shadow.c000066400000000000000000000105051321664017000163020ustar00rootroot00000000000000/* * Copyright (C) 2014 Josh Poimboeuf * Copyright (C) 2014 Seth Jennings * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, see . */ /* * kpatch shadow variables * * These functions can be used to add new "shadow" fields to existing data * structures. 
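 * The shadow fields live in a separate global hash table keyed by the address
 * of the parent object, so the layout of the original structure is never
 * modified.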
For example, to allocate a "newpid" variable associated with an * instance of task_struct, and assign it a value of 1000: * * struct task_struct *tsk = current; * int *newpid; * newpid = kpatch_shadow_alloc(tsk, "newpid", sizeof(int), GFP_KERNEL); * if (newpid) * *newpid = 1000; * * To retrieve a pointer to the variable: * * struct task_struct *tsk = current; * int *newpid; * newpid = kpatch_shadow_get(tsk, "newpid"); * if (newpid) * printk("task newpid = %d\n", *newpid); // prints "task newpid = 1000" * * To free it: * * kpatch_shadow_free(tsk, "newpid"); */ #include #include #include "kpatch.h" static DEFINE_HASHTABLE(kpatch_shadow_hash, 12); static DEFINE_SPINLOCK(kpatch_shadow_lock); struct kpatch_shadow { struct hlist_node node; struct rcu_head rcu_head; void *obj; union { char *var; /* assumed to be 4-byte aligned */ unsigned long flags; }; void *data; }; #define SHADOW_FLAG_INPLACE 0x1 #define SHADOW_FLAG_RESERVED0 0x2 /* reserved for future use */ #define SHADOW_FLAG_MASK 0x3 #define SHADOW_PTR_MASK (~(SHADOW_FLAG_MASK)) static inline void shadow_set_inplace(struct kpatch_shadow *shadow) { shadow->flags |= SHADOW_FLAG_INPLACE; } static inline int shadow_is_inplace(struct kpatch_shadow *shadow) { return shadow->flags & SHADOW_FLAG_INPLACE; } static inline char *shadow_var(struct kpatch_shadow *shadow) { return (char *)((unsigned long)shadow->var & SHADOW_PTR_MASK); } void *kpatch_shadow_alloc(void *obj, char *var, size_t size, gfp_t gfp) { unsigned long flags; struct kpatch_shadow *shadow; shadow = kmalloc(sizeof(*shadow), gfp); if (!shadow) return NULL; shadow->obj = obj; shadow->var = kstrdup(var, gfp); if (!shadow->var) { kfree(shadow); return NULL; } if (size <= sizeof(shadow->data)) { shadow->data = &shadow->data; shadow_set_inplace(shadow); } else { shadow->data = kmalloc(size, gfp); if (!shadow->data) { kfree(shadow->var); kfree(shadow); return NULL; } } spin_lock_irqsave(&kpatch_shadow_lock, flags); hash_add_rcu(kpatch_shadow_hash, &shadow->node, (unsigned long)obj); spin_unlock_irqrestore(&kpatch_shadow_lock, flags); return shadow->data; } EXPORT_SYMBOL_GPL(kpatch_shadow_alloc); static void kpatch_shadow_rcu_free(struct rcu_head *head) { struct kpatch_shadow *shadow; shadow = container_of(head, struct kpatch_shadow, rcu_head); if (!shadow_is_inplace(shadow)) kfree(shadow->data); kfree(shadow_var(shadow)); kfree(shadow); } void kpatch_shadow_free(void *obj, char *var) { unsigned long flags; struct kpatch_shadow *shadow; spin_lock_irqsave(&kpatch_shadow_lock, flags); hash_for_each_possible(kpatch_shadow_hash, shadow, node, (unsigned long)obj) { if (shadow->obj == obj && !strcmp(shadow_var(shadow), var)) { hash_del_rcu(&shadow->node); spin_unlock_irqrestore(&kpatch_shadow_lock, flags); call_rcu(&shadow->rcu_head, kpatch_shadow_rcu_free); return; } } spin_unlock_irqrestore(&kpatch_shadow_lock, flags); } EXPORT_SYMBOL_GPL(kpatch_shadow_free); void *kpatch_shadow_get(void *obj, char *var) { struct kpatch_shadow *shadow; rcu_read_lock(); hash_for_each_possible_rcu(kpatch_shadow_hash, shadow, node, (unsigned long)obj) { if (shadow->obj == obj && !strcmp(shadow_var(shadow), var)) { rcu_read_unlock(); if (shadow_is_inplace(shadow)) return &(shadow->data); return shadow->data; } } rcu_read_unlock(); return NULL; } EXPORT_SYMBOL_GPL(kpatch_shadow_get); kpatch-0.5.0/kmod/patch/000077500000000000000000000000001321664017000150175ustar00rootroot00000000000000kpatch-0.5.0/kmod/patch/Makefile000066400000000000000000000013571321664017000164650ustar00rootroot00000000000000KPATCH_BUILD ?= 
/lib/modules/$(shell uname -r)/build KPATCH_MAKE = $(MAKE) -C $(KPATCH_BUILD) M=$(PWD) LDFLAGS += $(KPATCH_LDFLAGS) # ppc64le kernel modules are expected to compile with the # -mcmodel=large flag. This enables 64-bit relocations # instead of a 32-bit offset from the TOC pointer. PROCESSOR = $(shell uname -m) ifeq ($(PROCESSOR), ppc64le) KBUILD_CFLAGS_MODULE += -mcmodel=large endif obj-m += $(KPATCH_NAME).o $(KPATCH_NAME)-objs += patch-hook.o kpatch.lds output.o all: $(KPATCH_NAME).ko $(KPATCH_NAME).ko: $(KPATCH_MAKE) $(KPATCH_NAME).ko patch-hook.o: patch-hook.c kpatch-patch-hook.c livepatch-patch-hook.c $(KPATCH_MAKE) patch-hook.o clean: $(RM) -Rf .*.o.cmd .*.ko.cmd .tmp_versions *.o *.ko *.mod.c \ Module.symvers kpatch-0.5.0/kmod/patch/kpatch-macros.h000066400000000000000000000074251321664017000177340ustar00rootroot00000000000000#ifndef __KPATCH_MACROS_H_ #define __KPATCH_MACROS_H_ #include #include typedef void (*kpatch_loadcall_t)(void); typedef void (*kpatch_unloadcall_t)(void); struct kpatch_load { kpatch_loadcall_t fn; char *objname; /* filled in by create-diff-object */ }; struct kpatch_unload { kpatch_unloadcall_t fn; char *objname; /* filled in by create-diff-object */ }; /* * KPATCH_IGNORE_SECTION macro * * This macro is for ignoring sections that may change as a side effect of * another change or might be a non-bundlable section; that is one that does * not honor -ffunction-section and create a one-to-one relation from function * symbol to section. */ #define KPATCH_IGNORE_SECTION(_sec) \ char *__UNIQUE_ID(kpatch_ignore_section_) __section(.kpatch.ignore.sections) = _sec; /* * KPATCH_IGNORE_FUNCTION macro * * This macro is for ignoring functions that may change as a side effect of a * change in another function. The WARN class of macros, for example, embed * the line number in an instruction, which will cause the function to be * detected as changed when, in fact, there has been no functional change. */ #define KPATCH_IGNORE_FUNCTION(_fn) \ void *__kpatch_ignore_func_##_fn __section(.kpatch.ignore.functions) = _fn; /* * KPATCH_LOAD_HOOK macro * * The first line only ensures that the hook being registered has the required * function signature. If not, there is compile error on this line. * * The section line declares a struct kpatch_load to be allocated in a new * .kpatch.hook.load section. This kpatch_load_data symbol is later stripped * by create-diff-object so that it can be declared in multiple objects that * are later linked together, avoiding global symbol collision. Since multiple * hooks can be registered, the .kpatch.hook.load section is a table of struct * kpatch_load elements that will be executed in series by the kpatch core * module at load time, assuming the kernel object (module) is currently * loaded; otherwise, the hook is called when module to be patched is loaded * via the module load notifier. */ #define KPATCH_LOAD_HOOK(_fn) \ static inline kpatch_loadcall_t __loadtest(void) { return _fn; } \ struct kpatch_load kpatch_load_data __section(.kpatch.hooks.load) = { \ .fn = _fn, \ .objname = NULL \ }; /* * KPATCH_UNLOAD_HOOK macro * * Same as LOAD hook with s/load/unload/ */ #define KPATCH_UNLOAD_HOOK(_fn) \ static inline kpatch_unloadcall_t __unloadtest(void) { return _fn; } \ struct kpatch_unload kpatch_unload_data __section(.kpatch.hooks.unload) = { \ .fn = _fn, \ .objname = NULL \ }; /* * KPATCH_FORCE_UNSAFE macro * * USE WITH EXTREME CAUTION! * * Allows patch authors to bypass the activeness safety check at patch load * time. 
Do this ONLY IF 1) the patch application will always/likely fail due * to the function being on the stack of at least one thread at all times and * 2) it is safe for both the original and patched versions of the function to * run concurrently. */ #define KPATCH_FORCE_UNSAFE(_fn) \ void *__kpatch_force_func_##_fn __section(.kpatch.force) = _fn; /* * KPATCH_PRINTK macro * * Use this instead of calling printk to avoid unwanted compiler optimizations * which cause kpatch-build errors. * * The printk function is annotated with the __cold attribute, which tells gcc * that the function is unlikely to be called. A side effect of this is that * code paths containing calls to printk might also be marked cold, leading to * other functions called in those code paths getting moved into .text.unlikely * or being uninlined. * * This macro places printk in its own code path so as not to make the * surrounding code path cold. */ #define KPATCH_PRINTK(_fmt, ...) \ ({ \ if (jiffies) \ printk(_fmt, ## __VA_ARGS__); \ }) #endif /* __KPATCH_MACROS_H_ */ kpatch-0.5.0/kmod/patch/kpatch-patch-hook.c000066400000000000000000000227411321664017000204760ustar00rootroot00000000000000/* * Copyright (C) 2013-2014 Josh Poimboeuf * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. 
*/ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include #include #include #include #include "kpatch.h" #include "kpatch-patch.h" static bool replace; module_param(replace, bool, S_IRUGO); MODULE_PARM_DESC(replace, "replace all previously loaded patch modules"); extern struct kpatch_patch_func __kpatch_funcs[], __kpatch_funcs_end[]; extern struct kpatch_patch_dynrela __kpatch_dynrelas[], __kpatch_dynrelas_end[]; extern struct kpatch_patch_hook __kpatch_hooks_load[], __kpatch_hooks_load_end[]; extern struct kpatch_patch_hook __kpatch_hooks_unload[], __kpatch_hooks_unload_end[]; extern unsigned long __kpatch_force_funcs[], __kpatch_force_funcs_end[]; extern char __kpatch_checksum[]; static struct kpatch_module kpmod; static ssize_t patch_enabled_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { return sprintf(buf, "%d\n", kpmod.enabled); } static ssize_t patch_enabled_store(struct kobject *kobj, struct kobj_attribute *attr, const char *buf, size_t count) { int ret; unsigned long val; ret = kstrtoul(buf, 10, &val); if (ret) return ret; val = !!val; if (val) ret = kpatch_register(&kpmod, replace); else ret = kpatch_unregister(&kpmod); if (ret) return ret; return count; } static ssize_t patch_checksum_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { return snprintf(buf, PAGE_SIZE, "%s\n", __kpatch_checksum); } static struct kobj_attribute patch_enabled_attr = __ATTR(enabled, 0644, patch_enabled_show, patch_enabled_store); static struct kobj_attribute patch_checksum_attr = __ATTR(checksum, 0444, patch_checksum_show, NULL); static struct attribute *patch_attrs[] = { &patch_enabled_attr.attr, &patch_checksum_attr.attr, NULL, }; static void patch_kobj_free(struct kobject *kobj) { } static struct kobj_type patch_ktype = { .release = patch_kobj_free, .sysfs_ops = &kobj_sysfs_ops, .default_attrs = patch_attrs, }; static ssize_t patch_func_old_addr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { struct kpatch_func *func = container_of(kobj, struct kpatch_func, kobj); return sprintf(buf, "0x%lx\n", func->old_addr); } static ssize_t patch_func_new_addr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { struct kpatch_func *func = container_of(kobj, struct kpatch_func, kobj); return sprintf(buf, "0x%lx\n", func->new_addr); } static struct kobj_attribute patch_old_addr_attr = __ATTR(old_addr, S_IRUSR, patch_func_old_addr_show, NULL); static struct kobj_attribute patch_new_addr_attr = __ATTR(new_addr, S_IRUSR, patch_func_new_addr_show, NULL); static struct attribute *patch_func_kobj_attrs[] = { &patch_old_addr_attr.attr, &patch_new_addr_attr.attr, NULL, }; static ssize_t patch_func_kobj_show(struct kobject *kobj, struct attribute *attr, char *buf) { struct kobj_attribute *func_attr = container_of(attr, struct kobj_attribute, attr); return func_attr->show(kobj, func_attr, buf); } static const struct sysfs_ops patch_func_sysfs_ops = { .show = patch_func_kobj_show, }; static void patch_func_kobj_free(struct kobject *kobj) { struct kpatch_func *func = container_of(kobj, struct kpatch_func, kobj); kfree(func); } static struct kobj_type patch_func_ktype = { .release = patch_func_kobj_free, .sysfs_ops = &patch_func_sysfs_ops, .default_attrs = patch_func_kobj_attrs, }; static void patch_object_kobj_free(struct kobject *kobj) { struct kpatch_object *obj = container_of(kobj, struct kpatch_object, kobj); kfree(obj); } static struct kobj_type patch_object_ktype = { .release = patch_object_kobj_free, .sysfs_ops = &kobj_sysfs_ops, }; static 
struct kpatch_object *patch_find_or_add_object(struct list_head *head, const char *name) { struct kpatch_object *object; int ret; list_for_each_entry(object, head, list) { if (!strcmp(object->name, name)) return object; } object = kzalloc(sizeof(*object), GFP_KERNEL); if (!object) return NULL; object->name = name; INIT_LIST_HEAD(&object->funcs); INIT_LIST_HEAD(&object->dynrelas); INIT_LIST_HEAD(&object->hooks_load); INIT_LIST_HEAD(&object->hooks_unload); list_add_tail(&object->list, head); ret = kobject_init_and_add(&object->kobj, &patch_object_ktype, &kpmod.kobj, "%s", object->name); if (ret) { list_del(&object->list); kfree(object); return NULL; } return object; } static void patch_free_objects(void) { struct kpatch_object *object, *object_safe; struct kpatch_func *func, *func_safe; struct kpatch_dynrela *dynrela, *dynrela_safe; struct kpatch_hook *hook, *hook_safe; list_for_each_entry_safe(object, object_safe, &kpmod.objects, list) { list_for_each_entry_safe(func, func_safe, &object->funcs, list) { list_del(&func->list); kobject_put(&func->kobj); } list_for_each_entry_safe(dynrela, dynrela_safe, &object->dynrelas, list) { list_del(&dynrela->list); kfree(dynrela); } list_for_each_entry_safe(hook, hook_safe, &object->hooks_load, list) { list_del(&hook->list); kfree(hook); } list_for_each_entry_safe(hook, hook_safe, &object->hooks_unload, list) { list_del(&hook->list); kfree(hook); } list_del(&object->list); kobject_put(&object->kobj); } } static int patch_is_func_forced(unsigned long addr) { unsigned long *a; for (a = __kpatch_force_funcs; a < __kpatch_force_funcs_end; a++) if (*a == addr) return 1; return 0; } static int patch_make_funcs_list(struct list_head *objects) { struct kpatch_object *object; struct kpatch_patch_func *p_func; struct kpatch_func *func; int ret; for (p_func = __kpatch_funcs; p_func < __kpatch_funcs_end; p_func++) { object = patch_find_or_add_object(&kpmod.objects, p_func->objname); if (!object) return -ENOMEM; func = kzalloc(sizeof(*func), GFP_KERNEL); if (!func) return -ENOMEM; func->new_addr = p_func->new_addr; func->new_size = p_func->new_size; func->old_size = p_func->old_size; func->sympos = p_func->sympos; func->name = p_func->name; func->force = patch_is_func_forced(func->new_addr); list_add_tail(&func->list, &object->funcs); ret = kobject_init_and_add(&func->kobj, &patch_func_ktype, &object->kobj, "%s,%lu", func->name, func->sympos ? 
func->sympos : 1); if (ret) return ret; } return 0; } static int patch_make_dynrelas_list(struct list_head *objects) { struct kpatch_object *object; struct kpatch_patch_dynrela *p_dynrela; struct kpatch_dynrela *dynrela; for (p_dynrela = __kpatch_dynrelas; p_dynrela < __kpatch_dynrelas_end; p_dynrela++) { object = patch_find_or_add_object(objects, p_dynrela->objname); if (!object) return -ENOMEM; dynrela = kzalloc(sizeof(*dynrela), GFP_KERNEL); if (!dynrela) return -ENOMEM; dynrela->dest = p_dynrela->dest; dynrela->type = p_dynrela->type; dynrela->sympos = p_dynrela->sympos; dynrela->name = p_dynrela->name; dynrela->external = p_dynrela->external; dynrela->addend = p_dynrela->addend; list_add_tail(&dynrela->list, &object->dynrelas); } return 0; } static int patch_make_hook_lists(struct list_head *objects) { struct kpatch_object *object; struct kpatch_patch_hook *p_hook; struct kpatch_hook *hook; for (p_hook = __kpatch_hooks_load; p_hook < __kpatch_hooks_load_end; p_hook++) { object = patch_find_or_add_object(objects, p_hook->objname); if (!object) return -ENOMEM; hook = kzalloc(sizeof(*hook), GFP_KERNEL); if (!hook) return -ENOMEM; hook->hook = p_hook->hook; list_add_tail(&hook->list, &object->hooks_load); } for (p_hook = __kpatch_hooks_unload; p_hook < __kpatch_hooks_unload_end; p_hook++) { object = patch_find_or_add_object(objects, p_hook->objname); if (!object) return -ENOMEM; hook = kzalloc(sizeof(*hook), GFP_KERNEL); if (!hook) return -ENOMEM; hook->hook = p_hook->hook; list_add_tail(&hook->list, &object->hooks_unload); } return 0; } static int __init patch_init(void) { int ret; ret = kobject_init_and_add(&kpmod.kobj, &patch_ktype, kpatch_root_kobj, "%s", THIS_MODULE->name); if (ret) return -ENOMEM; kpmod.mod = THIS_MODULE; INIT_LIST_HEAD(&kpmod.objects); ret = patch_make_funcs_list(&kpmod.objects); if (ret) goto err_objects; ret = patch_make_dynrelas_list(&kpmod.objects); if (ret) goto err_objects; ret = patch_make_hook_lists(&kpmod.objects); if (ret) goto err_objects; ret = kpatch_register(&kpmod, replace); if (ret) goto err_objects; return 0; err_objects: patch_free_objects(); kobject_put(&kpmod.kobj); return ret; } static void __exit patch_exit(void) { WARN_ON(kpmod.enabled); patch_free_objects(); kobject_put(&kpmod.kobj); } module_init(patch_init); module_exit(patch_exit); MODULE_LICENSE("GPL"); kpatch-0.5.0/kmod/patch/kpatch-patch.h000066400000000000000000000024431321664017000175420ustar00rootroot00000000000000/* * kpatch-patch.h * * Copyright (C) 2014 Josh Poimboeuf * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, see . 
* * Contains the structs used for the patch module special sections */ #ifndef _KPATCH_PATCH_H_ #define _KPATCH_PATCH_H_ struct kpatch_patch_func { unsigned long new_addr; unsigned long new_size; unsigned long old_addr; unsigned long old_size; unsigned long sympos; char *name; char *objname; }; struct kpatch_patch_dynrela { unsigned long dest; unsigned long src; unsigned long type; unsigned long sympos; char *name; char *objname; int external; int addend; }; struct kpatch_patch_hook { void (*hook)(void); char *objname; }; #endif /* _KPATCH_PATCH_H_ */ kpatch-0.5.0/kmod/patch/kpatch.h000077700000000000000000000000001321664017000212232../core/kpatch.hustar00rootroot00000000000000kpatch-0.5.0/kmod/patch/kpatch.lds.S000066400000000000000000000020761321664017000172030ustar00rootroot00000000000000__kpatch_funcs = ADDR(.kpatch.funcs); __kpatch_funcs_end = ADDR(.kpatch.funcs) + SIZEOF(.kpatch.funcs); #ifdef __KPATCH_MODULE__ __kpatch_dynrelas = ADDR(.kpatch.dynrelas); __kpatch_dynrelas_end = ADDR(.kpatch.dynrelas) + SIZEOF(.kpatch.dynrelas); __kpatch_checksum = ADDR(.kpatch.checksum); #endif SECTIONS { .kpatch.hooks.load : { __kpatch_hooks_load = . ; *(.kpatch.hooks.load) __kpatch_hooks_load_end = . ; /* * Pad the end of the section with zeros in case the section is empty. * This prevents the kernel from discarding the section at module * load time. __kpatch_hooks_load_end will still point to the end of * the section before the padding. If the .kpatch.hooks.load section * is empty, __kpatch_hooks_load equals __kpatch_hooks_load_end. */ QUAD(0); } .kpatch.hooks.unload : { __kpatch_hooks_unload = . ; *(.kpatch.hooks.unload) __kpatch_hooks_unload_end = . ; QUAD(0); } .kpatch.force : { __kpatch_force_funcs = . ; *(.kpatch.force) __kpatch_force_funcs_end = . ; QUAD(0); } } kpatch-0.5.0/kmod/patch/livepatch-patch-hook.c000066400000000000000000000211731321664017000212010ustar00rootroot00000000000000/* * Copyright (C) 2013-2014 Josh Poimboeuf * Copyright (C) 2014 Seth Jennings * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. 
*/ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include #include #include #include #include #include #include #include "kpatch-patch.h" #ifndef UTS_UBUNTU_RELEASE_ABI #define UTS_UBUNTU_RELEASE_ABI 0 #endif #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 7, 0) || \ defined(RHEL_RELEASE_CODE) #define HAVE_ELF_RELOCS #endif #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 5, 0) || \ (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 4, 0) && \ UTS_UBUNTU_RELEASE_ABI >= 7) || \ defined(RHEL_RELEASE_CODE) #define HAVE_SYMPOS #endif #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 12, 0) || \ defined(RHEL_RELEASE_CODE) #define HAVE_IMMEDIATE #endif /* * There are quite a few similar structures at play in this file: * - livepatch.h structs prefixed with klp_* * - kpatch-patch.h structs prefixed with kpatch_patch_* * - local scaffolding structs prefixed with patch_* * * The naming of the struct variables follows this convention: * - livepatch struct being with "l" (e.g. lfunc) * - kpatch_patch structs being with "k" (e.g. kfunc) * - local scaffolding structs have no prefix (e.g. func) * * The program reads in kpatch_patch structures, arranges them into the * scaffold structures, then creates a livepatch structure suitable for * registration with the livepatch kernel API. The scaffold structs only * exist to allow the construction of the klp_patch struct. Once that is * done, the scaffold structs are no longer needed. */ struct klp_patch *lpatch; static LIST_HEAD(patch_objects); static int patch_objects_nr; struct patch_object { struct list_head list; struct list_head funcs; struct list_head relocs; const char *name; int funcs_nr, relocs_nr; }; struct patch_func { struct list_head list; struct kpatch_patch_func *kfunc; }; struct patch_reloc { struct list_head list; struct kpatch_patch_dynrela *kdynrela; }; static struct patch_object *patch_alloc_new_object(const char *name) { struct patch_object *object; object = kzalloc(sizeof(*object), GFP_KERNEL); if (!object) return NULL; INIT_LIST_HEAD(&object->funcs); #ifndef HAVE_ELF_RELOCS INIT_LIST_HEAD(&object->relocs); #endif if (strcmp(name, "vmlinux")) object->name = name; list_add(&object->list, &patch_objects); patch_objects_nr++; return object; } static struct patch_object *patch_find_object_by_name(const char *name) { struct patch_object *object; list_for_each_entry(object, &patch_objects, list) if ((!strcmp(name, "vmlinux") && !object->name) || (object->name && !strcmp(object->name, name))) return object; return patch_alloc_new_object(name); } static int patch_add_func_to_object(struct kpatch_patch_func *kfunc) { struct patch_func *func; struct patch_object *object; func = kzalloc(sizeof(*func), GFP_KERNEL); if (!func) return -ENOMEM; INIT_LIST_HEAD(&func->list); func->kfunc = kfunc; object = patch_find_object_by_name(kfunc->objname); if (!object) { kfree(func); return -ENOMEM; } list_add(&func->list, &object->funcs); object->funcs_nr++; return 0; } #ifndef HAVE_ELF_RELOCS static int patch_add_reloc_to_object(struct kpatch_patch_dynrela *kdynrela) { struct patch_reloc *reloc; struct patch_object *object; reloc = kzalloc(sizeof(*reloc), GFP_KERNEL); if (!reloc) return -ENOMEM; INIT_LIST_HEAD(&reloc->list); reloc->kdynrela = kdynrela; object = patch_find_object_by_name(kdynrela->objname); if (!object) { kfree(reloc); return -ENOMEM; } list_add(&reloc->list, &object->relocs); object->relocs_nr++; return 0; } #endif static void patch_free_scaffold(void) { struct patch_func *func, *safefunc; struct patch_object *object, *safeobject; #ifndef HAVE_ELF_RELOCS struct patch_reloc 
*reloc, *safereloc; #endif list_for_each_entry_safe(object, safeobject, &patch_objects, list) { list_for_each_entry_safe(func, safefunc, &object->funcs, list) { list_del(&func->list); kfree(func); } #ifndef HAVE_ELF_RELOCS list_for_each_entry_safe(reloc, safereloc, &object->relocs, list) { list_del(&reloc->list); kfree(reloc); } #endif list_del(&object->list); kfree(object); } } static void patch_free_livepatch(struct klp_patch *patch) { struct klp_object *object; if (patch) { for (object = patch->objs; object && object->funcs; object++) { if (object->funcs) kfree(object->funcs); #ifndef HAVE_ELF_RELOCS if (object->relocs) kfree(object->relocs); #endif } if (patch->objs) kfree(patch->objs); kfree(patch); } } extern struct kpatch_patch_func __kpatch_funcs[], __kpatch_funcs_end[]; #ifndef HAVE_ELF_RELOCS extern struct kpatch_patch_dynrela __kpatch_dynrelas[], __kpatch_dynrelas_end[]; #endif static int __init patch_init(void) { struct kpatch_patch_func *kfunc; struct klp_object *lobjects, *lobject; struct klp_func *lfuncs, *lfunc; struct patch_object *object; struct patch_func *func; int ret = 0, i, j; #ifndef HAVE_ELF_RELOCS struct kpatch_patch_dynrela *kdynrela; struct patch_reloc *reloc; struct klp_reloc *lrelocs, *lreloc; #endif /* organize functions and relocs by object in scaffold */ for (kfunc = __kpatch_funcs; kfunc != __kpatch_funcs_end; kfunc++) { ret = patch_add_func_to_object(kfunc); if (ret) goto out; } #ifndef HAVE_ELF_RELOCS for (kdynrela = __kpatch_dynrelas; kdynrela != __kpatch_dynrelas_end; kdynrela++) { ret = patch_add_reloc_to_object(kdynrela); if (ret) goto out; } #endif /* past this point, only possible return code is -ENOMEM */ ret = -ENOMEM; /* allocate and fill livepatch structures */ lpatch = kzalloc(sizeof(*lpatch), GFP_KERNEL); if (!lpatch) goto out; lobjects = kzalloc(sizeof(*lobjects) * (patch_objects_nr+1), GFP_KERNEL); if (!lobjects) goto out; lpatch->mod = THIS_MODULE; lpatch->objs = lobjects; #if defined(__powerpc__) && defined(HAVE_IMMEDIATE) lpatch->immediate = true; #endif i = 0; list_for_each_entry(object, &patch_objects, list) { lobject = &lobjects[i]; lobject->name = object->name; lfuncs = kzalloc(sizeof(struct klp_func) * (object->funcs_nr+1), GFP_KERNEL); if (!lfuncs) goto out; lobject->funcs = lfuncs; j = 0; list_for_each_entry(func, &object->funcs, list) { lfunc = &lfuncs[j]; lfunc->old_name = func->kfunc->name; lfunc->new_func = (void *)func->kfunc->new_addr; #ifdef HAVE_SYMPOS lfunc->old_sympos = func->kfunc->sympos; #else lfunc->old_addr = func->kfunc->old_addr; #endif j++; } #ifndef HAVE_ELF_RELOCS lrelocs = kzalloc(sizeof(struct klp_reloc) * (object->relocs_nr+1), GFP_KERNEL); if (!lrelocs) goto out; lobject->relocs = lrelocs; j = 0; list_for_each_entry(reloc, &object->relocs, list) { lreloc = &lrelocs[j]; lreloc->loc = reloc->kdynrela->dest; #ifdef HAVE_SYMPOS lreloc->sympos = reloc->kdynrela->sympos; #else lreloc->val = reloc->kdynrela->src; #endif /* HAVE_SYMPOS */ lreloc->type = reloc->kdynrela->type; lreloc->name = reloc->kdynrela->name; lreloc->addend = reloc->kdynrela->addend; lreloc->external = reloc->kdynrela->external; j++; } #endif /* HAVE_ELF_RELOCS */ i++; } /* * Once the patch structure that the live patching API expects * has been built, we can release the scaffold structure. 
*/ patch_free_scaffold(); ret = klp_register_patch(lpatch); if (ret) { patch_free_livepatch(lpatch); return ret; } ret = klp_enable_patch(lpatch); if (ret) { WARN_ON(klp_unregister_patch(lpatch)); patch_free_livepatch(lpatch); return ret; } return 0; out: patch_free_livepatch(lpatch); patch_free_scaffold(); return ret; } static void __exit patch_exit(void) { WARN_ON(klp_unregister_patch(lpatch)); } module_init(patch_init); module_exit(patch_exit); MODULE_LICENSE("GPL"); MODULE_INFO(livepatch, "Y"); kpatch-0.5.0/kmod/patch/patch-hook.c000066400000000000000000000016071321664017000172240ustar00rootroot00000000000000/* * Copyright (C) 2015 Seth Jennings * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. */ #if IS_ENABLED(CONFIG_LIVEPATCH) #include "livepatch-patch-hook.c" #else #include "kpatch-patch-hook.c" #endif kpatch-0.5.0/kpatch-build/000077500000000000000000000000001321664017000153355ustar00rootroot00000000000000kpatch-0.5.0/kpatch-build/Makefile000066400000000000000000000024211321664017000167740ustar00rootroot00000000000000include ../Makefile.inc CFLAGS += -MMD -MP -I../kmod/patch -Iinsn -Wall -g -Werror LDLIBS = -lelf TARGETS = create-diff-object create-klp-module create-kpatch-module SOURCES = create-diff-object.c kpatch-elf.c \ create-klp-module.c \ create-kpatch-module.c \ create-kpatch-module.c lookup.c ifeq ($(ARCH),x86_64) SOURCES += insn/insn.c insn/inat.c INSN = insn/insn.o insn/inat.o else ifeq ($(ARCH),ppc64le) SOURCES += gcc-plugins/ppc64le-plugin.c PLUGIN = gcc-plugins/ppc64le-plugin.so TARGETS += $(PLUGIN) GCC_PLUGINS_DIR := $(shell gcc -print-file-name=plugin) PLUGIN_CFLAGS = -shared $(CFLAGS) -I$(GCC_PLUGINS_DIR)/include \ -Igcc-plugins -fPIC -fno-rtti -O2 -Wall endif all: $(TARGETS) -include $(SOURCES:.c=.d) create-diff-object: create-diff-object.o kpatch-elf.o lookup.o $(INSN) create-klp-module: create-klp-module.o kpatch-elf.o create-kpatch-module: create-kpatch-module.o kpatch-elf.o $(PLUGIN): gcc-plugins/ppc64le-plugin.c g++ $(PLUGIN_CFLAGS) $< -o $@ install: all $(INSTALL) -d $(LIBEXECDIR) $(INSTALL) $(TARGETS) kpatch-gcc $(PLUGIN) $(LIBEXECDIR) $(INSTALL) -d $(BINDIR) $(INSTALL) kpatch-build $(BINDIR) uninstall: $(RM) -R $(LIBEXECDIR) $(RM) $(BINDIR)/kpatch-build clean: $(RM) $(TARGETS) *.{o,d} insn/*.{o,d} gcc-plugins/*.{so,d} kpatch-0.5.0/kpatch-build/create-diff-object.c000066400000000000000000002375261321664017000211350ustar00rootroot00000000000000/* * create-diff-object.c * * Copyright (C) 2014 Seth Jennings * Copyright (C) 2013-2014 Josh Poimboeuf * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. */ /* * This file contains the heart of the ELF object differencing engine. * * The tool takes two ELF objects from two versions of the same source * file; a "base" object and a "patched" object. These object need to have * been compiled with the -ffunction-sections and -fdata-sections GCC options. * * The tool compares the objects at a section level to determine what * sections have changed. Once a list of changed sections has been generated, * various rules are applied to determine any object local sections that * are dependencies of the changed section and also need to be included in * the output object. */ #include #include #include #include #include #include #include #include #include #include #include #include "list.h" #include "lookup.h" #include "asm/insn.h" #include "kpatch-patch.h" #include "kpatch-elf.h" #include "kpatch-intermediate.h" #define DIFF_FATAL(format, ...) \ ({ \ fprintf(stderr, "ERROR: %s: " format "\n", childobj, ##__VA_ARGS__); \ error(2, 0, "unreconcilable difference"); \ }) #ifdef __powerpc__ #define ABSOLUTE_RELA_TYPE R_PPC64_ADDR64 #else #define ABSOLUTE_RELA_TYPE R_X86_64_64 #endif char *childobj; enum loglevel loglevel = NORMAL; /******************* * Data structures * ****************/ struct special_section { char *name; int (*group_size)(struct kpatch_elf *kelf, int offset); }; /************* * Functions * **********/ static int is_bundleable(struct symbol *sym) { if (sym->type == STT_FUNC && !strncmp(sym->sec->name, ".text.",6) && !strcmp(sym->sec->name + 6, sym->name)) return 1; if (sym->type == STT_FUNC && !strncmp(sym->sec->name, ".text.unlikely.",15) && !strcmp(sym->sec->name + 15, sym->name)) return 1; if (sym->type == STT_OBJECT && !strncmp(sym->sec->name, ".data.",6) && !strcmp(sym->sec->name + 6, sym->name)) return 1; if (sym->type == STT_OBJECT && !strncmp(sym->sec->name, ".rodata.",8) && !strcmp(sym->sec->name + 8, sym->name)) return 1; if (sym->type == STT_OBJECT && !strncmp(sym->sec->name, ".bss.",5) && !strcmp(sym->sec->name + 5, sym->name)) return 1; return 0; } #ifdef __powerpc__ /* Symbol st_others value for powerpc */ #define STO_PPC64_LOCAL_BIT 5 #define STO_PPC64_LOCAL_MASK (7 << STO_PPC64_LOCAL_BIT) #define PPC64_LOCAL_ENTRY_OFFSET(other) \ (((1 << (((other) & STO_PPC64_LOCAL_MASK) >> STO_PPC64_LOCAL_BIT)) >> 2) << 2) /* * On ppc64le, the function prologue generated by GCC 6+ has the sequence: * * .globl my_func * .type my_func, @function * .quad .TOC.-my_func * my_func: * .reloc ., R_PPC64_ENTRY ; optional * ld r2,-8(r12) * add r2,r2,r12 * .localentry my_func, .-my_func * * my_func is the global entry point, which, when called, sets up the TOC. * .localentry is the local entry point, for calls to the function from within * the object file. The local entry point is 8 bytes after the global entry * point. 
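 *
 * A worked illustration of the encoding (derived from the
 * PPC64_LOCAL_ENTRY_OFFSET() macro defined above): the local entry
 * offset lives in the three STO_PPC64_LOCAL bits of st_other.  For the
 * 8-byte offset described above those bits hold the value 3, so a
 * typical GCC 6 function has st_other == (3 << STO_PPC64_LOCAL_BIT)
 * == 0x60, and
 *
 *   PPC64_LOCAL_ENTRY_OFFSET(0x60) == (((1 << 3) >> 2) << 2) == 8
 *
 * which is what is_gcc6_localentry_bundled_sym() below tests for,
 * letting kpatch_bundle_symbols() accept a bundled function symbol whose
 * st_value is 8 rather than the usual 0.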
*/ static int is_gcc6_localentry_bundled_sym(struct symbol *sym) { return (PPC64_LOCAL_ENTRY_OFFSET(sym->sym.st_other) && sym->sym.st_value == 8); } #else static int is_gcc6_localentry_bundled_sym(struct symbol *sym) { return 0; } #endif /* * When compiling with -ffunction-sections and -fdata-sections, almost every * symbol gets its own dedicated section. We call such symbols "bundled" * symbols. They're indicated by "sym->sec->sym == sym". */ static void kpatch_bundle_symbols(struct kpatch_elf *kelf) { struct symbol *sym; list_for_each_entry(sym, &kelf->symbols, list) { if (is_bundleable(sym)) { if (sym->sym.st_value != 0 && !is_gcc6_localentry_bundled_sym(sym)) { ERROR("symbol %s at offset %lu within section %s, expected 0", sym->name, sym->sym.st_value, sym->sec->name); } sym->sec->sym = sym; } } } /* * This function detects whether the given symbol is a "special" static local * variable (for lack of a better term). * * Special static local variables should never be correlated and should always * be included if they are referenced by an included function. */ static int is_special_static(struct symbol *sym) { static char *prefixes[] = { "__key.", "__warned.", "descriptor.", "__func__.", "_rs.", "CSWTCH.", NULL, }; char **prefix; if (!sym) return 0; if (sym->type == STT_SECTION) { /* __verbose section contains the descriptor variables */ if (!strcmp(sym->name, "__verbose")) return 1; /* otherwise make sure section is bundled */ if (!sym->sec->sym) return 0; /* use bundled object/function symbol for matching */ sym = sym->sec->sym; } if (sym->type != STT_OBJECT || sym->bind != STB_LOCAL) return 0; for (prefix = prefixes; *prefix; prefix++) if (!strncmp(sym->name, *prefix, strlen(*prefix))) return 1; return 0; } /* * This is like strcmp, but for gcc-mangled symbols. It skips the comparison * of any substring which consists of '.' followed by any number of digits. */ static int kpatch_mangled_strcmp(char *s1, char *s2) { while (*s1 == *s2) { if (!*s1) return 0; if (*s1 == '.' && isdigit(s1[1])) { if (!isdigit(s2[1])) return 1; while (isdigit(*++s1)) ; while (isdigit(*++s2)) ; } else { s1++; s2++; } } return 1; } static int rela_equal(struct rela *rela1, struct rela *rela2) { #ifdef __powerpc__ struct section *toc_relasec1, *toc_relasec2; struct rela *r_toc_relasec1, *r_toc_relasec2; #endif if (rela1->type != rela2->type || rela1->offset != rela2->offset) return 0; if (rela1->string) return rela2->string && !strcmp(rela1->string, rela2->string); #ifdef __powerpc__ /* * Relocation section '.rela.toc' at offset 0x91f0 contains 122 entries: * Offset Info Type Symbol's Value Symbol's Name + Addend * [...] * 0000000000000090 0000001c00000026 R_PPC64_ADDR64 0000000000000000 .rodata.__dev_remove_pack.str1.8 + 0 * 0000000000000098 0000001a00000026 R_PPC64_ADDR64 0000000000000000 .rodata.__dev_getfirstbyhwtype.str1.8 + 0 * 00000000000000a0 0000001a00000026 R_PPC64_ADDR64 0000000000000000 .rodata.__dev_getfirstbyhwtype.str1.8 + 10 * 00000000000000a8 000000cb00000026 R_PPC64_ADDR64 0000000000000000 dev_base_lock + 0 * * Relocation section '.rela.text.netdev_master_upper_dev_get' at offset 0xe38 contains 10 entries: * Offset Info Type Symbol's Value Symbol's Name + Addend * [...] 
* 00000000000000a0 0000004e00000032 R_PPC64_TOC16_HA 0000000000000000 .toc + 98 * 00000000000000a8 0000004e00000040 R_PPC64_TOC16_LO_DS 0000000000000000 .toc + 98 * 00000000000000ac 0000004e00000032 R_PPC64_TOC16_HA 0000000000000000 .toc + a0 * 00000000000000b0 0000004e00000040 R_PPC64_TOC16_LO_DS 0000000000000000 .toc + a0 * 00000000000000b4 000000b90000000a R_PPC64_REL24 0000000000000000 printk + 0 * 00000000000000bc 000000a10000000a R_PPC64_REL24 0000000000000000 dump_stack + 0 * * With -mcmodel=large on ppc64le, GCC might generate entries in the .toc * section for relocation symbol references. The .toc offsets may change * between the original and patched .o, so comparing ".toc + offset" isn't * right. Compare the .toc-based symbols by reading the corresponding relas * from the .toc section. */ if (!strcmp(rela1->sym->name, ".toc") && !strcmp(rela2->sym->name, ".toc")) { toc_relasec1 = rela1->sym->sec->rela; if (!toc_relasec1) ERROR("cannot find .rela.toc"); r_toc_relasec1 = find_rela_by_offset(toc_relasec1, rela1->addend); if (!r_toc_relasec1) ERROR(".toc entry not found %s + %x", rela1->sym->name, rela1->addend); toc_relasec2 = rela2->sym->sec->rela; if (!toc_relasec2) ERROR("cannot find .rela.toc"); r_toc_relasec2 = find_rela_by_offset(toc_relasec2, rela2->addend); if (!r_toc_relasec2) ERROR(".toc entry not found %s + %x", rela2->sym->name, rela2->addend); if (r_toc_relasec1->string) return r_toc_relasec2->string && !strcmp(r_toc_relasec1->string, r_toc_relasec2->string); if ((r_toc_relasec1->addend != r_toc_relasec2->addend)) return 0; if (is_special_static(r_toc_relasec1->sym)) return !kpatch_mangled_strcmp(r_toc_relasec1->sym->name, r_toc_relasec2->sym->name); return !strcmp(r_toc_relasec1->sym->name, r_toc_relasec2->sym->name); } #endif if (rela1->addend != rela2->addend) return 0; if (is_special_static(rela1->sym)) return !kpatch_mangled_strcmp(rela1->sym->name, rela2->sym->name); return !strcmp(rela1->sym->name, rela2->sym->name); } static void kpatch_compare_correlated_rela_section(struct section *sec) { struct rela *rela1, *rela2 = NULL; /* * On ppc64le, don't compare the .rela.toc section. The .toc and * .rela.toc sections are included as standard elements. 
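 *
 * (References of the form ".toc + offset" made from other rela sections
 * are still compared individually: rela_equal() above follows the
 * .rela.toc entry at that offset back to the symbol or string it
 * ultimately names, so skipping the wholesale comparison here loses
 * nothing.)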
*/ if (!strcmp(sec->name, ".rela.toc")) { sec->status = SAME; return; } rela2 = list_entry(sec->twin->relas.next, struct rela, list); list_for_each_entry(rela1, &sec->relas, list) { if (rela_equal(rela1, rela2)) { rela2 = list_entry(rela2->list.next, struct rela, list); continue; } sec->status = CHANGED; return; } sec->status = SAME; } static void kpatch_compare_correlated_nonrela_section(struct section *sec) { struct section *sec1 = sec, *sec2 = sec->twin; if (sec1->sh.sh_type != SHT_NOBITS && memcmp(sec1->data->d_buf, sec2->data->d_buf, sec1->data->d_size)) sec->status = CHANGED; else sec->status = SAME; } static void kpatch_compare_correlated_section(struct section *sec) { struct section *sec1 = sec, *sec2 = sec->twin; /* Compare section headers (must match or fatal) */ if (sec1->sh.sh_type != sec2->sh.sh_type || sec1->sh.sh_flags != sec2->sh.sh_flags || sec1->sh.sh_addralign != sec2->sh.sh_addralign || sec1->sh.sh_entsize != sec2->sh.sh_entsize) DIFF_FATAL("%s section header details differ", sec1->name); /* Short circuit for mcount sections, we rebuild regardless */ if (!strcmp(sec->name, ".rela__mcount_loc") || !strcmp(sec->name, "__mcount_loc")) { sec->status = SAME; goto out; } if (sec1->sh.sh_size != sec2->sh.sh_size || sec1->data->d_size != sec2->data->d_size) { sec->status = CHANGED; goto out; } if (is_rela_section(sec)) kpatch_compare_correlated_rela_section(sec); else kpatch_compare_correlated_nonrela_section(sec); out: if (sec->status == CHANGED) log_debug("section %s has changed\n", sec->name); } #ifdef __x86_64__ /* * Determine if a section has changed only due to a WARN* or might_sleep * macro call's embedding of the line number into an instruction operand. * * Warning: Hackery lies herein. It's hopefully justified by the fact that * this issue is very common. 
* * Example WARN*: * * 938: be 70 00 00 00 mov $0x70,%esi * 93d: 48 c7 c7 00 00 00 00 mov $0x0,%rdi * 940: R_X86_64_32S .rodata.tcp_conn_request.str1.8+0x88 * 944: c6 05 00 00 00 00 01 movb $0x1,0x0(%rip) # 94b * 946: R_X86_64_PC32 .data.unlikely-0x1 * 94b: e8 00 00 00 00 callq 950 * 94c: R_X86_64_PC32 warn_slowpath_null-0x4 * * Example might_sleep: * * 50f: be f7 01 00 00 mov $0x1f7,%esi * 514: 48 c7 c7 00 00 00 00 mov $0x0,%rdi * 517: R_X86_64_32S .rodata.do_select.str1.8+0x98 * 51b: e8 00 00 00 00 callq 520 * 51c: R_X86_64_PC32 ___might_sleep-0x4 * * The pattern which applies to all cases: * 1) immediate move of the line number to %esi * 2) (optional) string section rela * 3) (optional) __warned.xxxxx static local rela * 4) warn_slowpath_* or __might_sleep or ___might_sleep rela */ static int kpatch_line_macro_change_only(struct section *sec) { struct insn insn1, insn2; unsigned long start1, start2, size, offset, length; struct rela *rela; int lineonly = 0, found; if (sec->status != CHANGED || is_rela_section(sec) || !is_text_section(sec) || sec->sh.sh_size != sec->twin->sh.sh_size || !sec->rela || sec->rela->status != SAME) return 0; start1 = (unsigned long)sec->twin->data->d_buf; start2 = (unsigned long)sec->data->d_buf; size = sec->sh.sh_size; for (offset = 0; offset < size; offset += length) { insn_init(&insn1, (void *)(start1 + offset), 1); insn_init(&insn2, (void *)(start2 + offset), 1); insn_get_length(&insn1); insn_get_length(&insn2); length = insn1.length; if (!insn1.length || !insn2.length) ERROR("can't decode instruction in section %s at offset 0x%lx", sec->name, offset); if (insn1.length != insn2.length) return 0; if (!memcmp((void *)start1 + offset, (void *)start2 + offset, length)) continue; /* verify it's a mov immediate to %esi */ insn_get_opcode(&insn1); insn_get_opcode(&insn2); if (insn1.opcode.value != 0xbe || insn2.opcode.value != 0xbe) return 0; /* * Verify zero or more string relas followed by a * warn_slowpath_* or __might_sleep or ___might_sleep rela. 
*/ found = 0; list_for_each_entry(rela, &sec->rela->relas, list) { if (rela->offset < offset + length) continue; if (rela->string) continue; if (!strncmp(rela->sym->name, "__warned.", 9)) continue; if (!strncmp(rela->sym->name, "warn_slowpath_", 14) || (!strcmp(rela->sym->name, "__might_sleep")) || (!strcmp(rela->sym->name, "___might_sleep"))) { found = 1; break; } return 0; } if (!found) return 0; lineonly = 1; } if (!lineonly) ERROR("no instruction changes detected for changed section %s", sec->name); return 1; } #else static int kpatch_line_macro_change_only(struct section *sec) { return 0; } #endif static void kpatch_compare_sections(struct list_head *seclist) { struct section *sec; /* compare all sections */ list_for_each_entry(sec, seclist, list) { if (sec->twin) kpatch_compare_correlated_section(sec); else sec->status = NEW; } /* exclude WARN-only, might_sleep changes */ list_for_each_entry(sec, seclist, list) { if (kpatch_line_macro_change_only(sec)) { log_debug("reverting macro / line number section %s status to SAME\n", sec->name); sec->status = SAME; } } /* sync symbol status */ list_for_each_entry(sec, seclist, list) { if (is_rela_section(sec)) { if (sec->base->sym && sec->base->sym->status != CHANGED) sec->base->sym->status = sec->status; } else { if (sec->sym && sec->sym->status != CHANGED) sec->sym->status = sec->status; } } } static void kpatch_compare_correlated_symbol(struct symbol *sym) { struct symbol *sym1 = sym, *sym2 = sym->twin; if (sym1->sym.st_info != sym2->sym.st_info || (sym1->sec && !sym2->sec) || (sym2->sec && !sym1->sec)) DIFF_FATAL("symbol info mismatch: %s", sym1->name); /* * If two symbols are correlated but their sections are not, then the * symbol has changed sections. This is only allowed if the symbol is * moving out of an ignored section. */ if (sym1->sec && sym2->sec && sym1->sec->twin != sym2->sec) { if (sym2->sec->twin && sym2->sec->twin->ignore) sym->status = CHANGED; else DIFF_FATAL("symbol changed sections: %s", sym1->name); } if (sym1->type == STT_OBJECT && sym1->sym.st_size != sym2->sym.st_size) DIFF_FATAL("object size mismatch: %s", sym1->name); if (sym1->sym.st_shndx == SHN_UNDEF || sym1->sym.st_shndx == SHN_ABS) sym1->status = SAME; /* * The status of LOCAL symbols is dependent on the status of their * matching section and is set during section comparison. */ } static void kpatch_compare_symbols(struct list_head *symlist) { struct symbol *sym; list_for_each_entry(sym, symlist, list) { if (sym->twin) kpatch_compare_correlated_symbol(sym); else sym->status = NEW; log_debug("symbol %s is %s\n", sym->name, status_str(sym->status)); } } static void kpatch_correlate_sections(struct list_head *seclist1, struct list_head *seclist2) { struct section *sec1, *sec2; list_for_each_entry(sec1, seclist1, list) { list_for_each_entry(sec2, seclist2, list) { if (strcmp(sec1->name, sec2->name)) continue; if (is_special_static(is_rela_section(sec1) ? sec1->base->secsym : sec1->secsym)) continue; /* * Group sections must match exactly to be correlated. * Changed group sections are currently not supported. 
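 *
 * (A group section's data is just a flag word such as GRP_COMDAT
 * followed by the section indices of its members, as parsed by
 * kpatch_mark_grouped_sections() below, so the check that follows
 * simply requires the two groups' contents to be byte-for-byte
 * identical before correlating them.)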
*/ if (sec1->sh.sh_type == SHT_GROUP) { if (sec1->data->d_size != sec2->data->d_size) continue; if (memcmp(sec1->data->d_buf, sec2->data->d_buf, sec1->data->d_size)) continue; } sec1->twin = sec2; sec2->twin = sec1; /* set initial status, might change */ sec1->status = sec2->status = SAME; break; } } } static void kpatch_correlate_symbols(struct list_head *symlist1, struct list_head *symlist2) { struct symbol *sym1, *sym2; list_for_each_entry(sym1, symlist1, list) { list_for_each_entry(sym2, symlist2, list) { if (strcmp(sym1->name, sym2->name) || sym1->type != sym2->type) continue; if (is_special_static(sym1)) continue; /* * The .LCx symbols point to strings, usually used for * the bug table. Don't correlate and compare the * symbols themselves, because the suffix number might * change. * * If the symbol is used by the bug table (usual case), * it may get pulled in by * kpatch_regenerate_special_section(). * * If the symbol is used outside of the bug table (not * sure if this actually happens anywhere), any string * changes will be detected elsewhere in rela_equal(). */ if (sym1->type == STT_NOTYPE && !strncmp(sym1->name, ".LC", 3)) continue; /* group section symbols must have correlated sections */ if (sym1->sec && sym1->sec->sh.sh_type == SHT_GROUP && sym1->sec->twin != sym2->sec) continue; sym1->twin = sym2; sym2->twin = sym1; /* set initial status, might change */ sym1->status = sym2->status = SAME; break; } } } static void kpatch_compare_elf_headers(Elf *elf1, Elf *elf2) { GElf_Ehdr eh1, eh2; if (!gelf_getehdr(elf1, &eh1)) ERROR("gelf_getehdr"); if (!gelf_getehdr(elf2, &eh2)) ERROR("gelf_getehdr"); if (memcmp(eh1.e_ident, eh2.e_ident, EI_NIDENT) || eh1.e_type != eh2.e_type || eh1.e_machine != eh2.e_machine || eh1.e_version != eh2.e_version || eh1.e_entry != eh2.e_entry || eh1.e_phoff != eh2.e_phoff || eh1.e_flags != eh2.e_flags || eh1.e_ehsize != eh2.e_ehsize || eh1.e_phentsize != eh2.e_phentsize || eh1.e_shentsize != eh2.e_shentsize) DIFF_FATAL("ELF headers differ"); } static void kpatch_check_program_headers(Elf *elf) { size_t ph_nr; if (elf_getphdrnum(elf, &ph_nr)) ERROR("elf_getphdrnum"); if (ph_nr != 0) DIFF_FATAL("ELF contains program header"); } static void kpatch_mark_grouped_sections(struct kpatch_elf *kelf) { struct section *groupsec, *sec; unsigned int *data, *end; list_for_each_entry(groupsec, &kelf->sections, list) { if (groupsec->sh.sh_type != SHT_GROUP) continue; data = groupsec->data->d_buf; end = groupsec->data->d_buf + groupsec->data->d_size; data++; /* skip first flag word (e.g. GRP_COMDAT) */ while (data < end) { sec = find_section_by_index(&kelf->sections, *data); if (!sec) ERROR("group section not found"); sec->grouped = 1; log_debug("marking section %s (%d) as grouped\n", sec->name, sec->index); data++; } } } /* * When gcc makes compiler optimizations which affect a function's calling * interface, it mangles the function's name. For example, sysctl_print_dir is * renamed to sysctl_print_dir.isra.2. The problem is that the trailing number * is chosen arbitrarily, and the patched version of the function may end up * with a different trailing number. Rename any mangled patched functions to * match their base counterparts. 
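 *
 * For example (using the name from the comment above; the ".3" suffix is
 * hypothetical): a plain strcmp() of "sysctl_print_dir.isra.2" and
 * "sysctl_print_dir.isra.3" reports a mismatch, but
 * kpatch_mangled_strcmp() treats them as equal (it returns 0, like
 * strcmp) because it skips every "." followed by digits.  That is what
 * lets kpatch_rename_mangled_functions() below find the base object's
 * counterpart of a mangled patched function before renaming it.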
*/ static void kpatch_rename_mangled_functions(struct kpatch_elf *base, struct kpatch_elf *patched) { struct symbol *sym, *basesym; char name[256], *origname; struct section *sec, *basesec; int found; list_for_each_entry(sym, &patched->symbols, list) { if (sym->type != STT_FUNC) continue; if (!strstr(sym->name, ".isra.") && !strstr(sym->name, ".constprop.") && !strstr(sym->name, ".part.")) continue; found = 0; list_for_each_entry(basesym, &base->symbols, list) { if (!kpatch_mangled_strcmp(basesym->name, sym->name)) { found = 1; break; } } if (!found) continue; if (!strcmp(sym->name, basesym->name)) continue; log_debug("renaming %s to %s\n", sym->name, basesym->name); origname = sym->name; sym->name = strdup(basesym->name); if (sym != sym->sec->sym) continue; sym->sec->name = strdup(basesym->sec->name); if (sym->sec->rela) sym->sec->rela->name = strdup(basesym->sec->rela->name); /* * When function foo.isra.1 has a switch statement, it might * have a corresponding bundled .rodata.foo.isra.1 section (in * addition to .text.foo.isra.1 which we renamed above). */ sprintf(name, ".rodata.%s", origname); sec = find_section_by_name(&patched->sections, name); if (!sec) continue; sprintf(name, ".rodata.%s", basesym->name); basesec = find_section_by_name(&base->sections, name); if (!basesec) continue; sec->name = strdup(basesec->name); sec->secsym->name = sec->name; if (sec->rela) sec->rela->name = strdup(basesec->rela->name); } } static char *kpatch_section_function_name(struct section *sec) { if (is_rela_section(sec)) sec = sec->base; return sec->sym ? sec->sym->name : sec->name; } /* * Given a static local variable symbol and a rela section which references it * in the base object, find a corresponding usage of a similarly named symbol * in the patched object. */ static struct symbol *kpatch_find_static_twin(struct section *sec, struct symbol *sym) { struct rela *rela; if (!sec->twin) return NULL; /* find the patched object's corresponding variable */ list_for_each_entry(rela, &sec->twin->relas, list) { if (rela->sym->twin) continue; if (kpatch_mangled_strcmp(rela->sym->name, sym->name)) continue; return rela->sym; } return NULL; } static int kpatch_is_normal_static_local(struct symbol *sym) { if (sym->type != STT_OBJECT || sym->bind != STB_LOCAL) return 0; if (!strchr(sym->name, '.')) return 0; if (is_special_static(sym)) return 0; return 1; } /* * gcc renames static local variables by appending a period and a number. For * example, __foo could be renamed to __foo.31452. Unfortunately this number * can arbitrarily change. Correlate them by comparing which functions * reference them, and rename the patched symbols to match the base symbol * names. * * Some surprising facts about static local variable symbols: * * - It's possible for multiple functions to use the same * static local variable if the variable is defined in an * inlined function. * * - It's also possible for multiple static local variables * with the same name to be used in the same function if they * have different scopes. (We have to assume that in such * cases, the order in which they're referenced remains the * same between the base and patched objects, as there's no * other way to distinguish them.) * * - Static locals are usually referenced by functions, but * they can occasionally be referenced by data sections as * well. 
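 *
 * A concrete illustration (the function name and numeric suffixes are
 * hypothetical): if the base object's .text.bar references "__foo.31452"
 * and the patched object's correlated .text.bar references
 * "__foo.31453", the code below correlates the two symbols and renames
 * the patched one to "__foo.31452", so that later comparison and rela
 * processing treat them as the same variable.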
*/ static void kpatch_correlate_static_local_variables(struct kpatch_elf *base, struct kpatch_elf *patched) { struct symbol *sym, *patched_sym; struct section *sec; struct rela *rela, *rela2; int bundled, patched_bundled, found; /* * First undo the correlations for all static locals. Two static * locals can have the same numbered suffix in the base and patched * objects by coincidence. */ list_for_each_entry(sym, &base->symbols, list) { if (!kpatch_is_normal_static_local(sym)) continue; if (sym->twin) { sym->twin->twin = NULL; sym->twin = NULL; } bundled = sym == sym->sec->sym; if (bundled && sym->sec->twin) { sym->sec->twin->twin = NULL; sym->sec->twin = NULL; sym->sec->secsym->twin->twin = NULL; sym->sec->secsym->twin = NULL; if (sym->sec->rela) { sym->sec->rela->twin->twin = NULL; sym->sec->rela->twin = NULL; } } } /* * Do the correlations: for each section reference to a static local, * look for a corresponding reference in the section's twin. */ list_for_each_entry(sec, &base->sections, list) { if (!is_rela_section(sec) || is_debug_section(sec)) continue; list_for_each_entry(rela, &sec->relas, list) { sym = rela->sym; if (!kpatch_is_normal_static_local(sym)) continue; if (sym->twin) continue; bundled = sym == sym->sec->sym; if (bundled && sym->sec == sec->base) { /* * A rare case where a static local data * structure references itself. There's no * reliable way to correlate this. Hopefully * there's another reference to the symbol * somewhere that can be used. */ log_debug("can't correlate static local %s's reference to itself\n", sym->name); continue; } patched_sym = kpatch_find_static_twin(sec, sym); if (!patched_sym) DIFF_FATAL("reference to static local variable %s in %s was removed", sym->name, kpatch_section_function_name(sec)); patched_bundled = patched_sym == patched_sym->sec->sym; if (bundled != patched_bundled) ERROR("bundle mismatch for symbol %s", sym->name); if (!bundled && sym->sec->twin != patched_sym->sec) ERROR("sections %s and %s aren't correlated", sym->sec->name, patched_sym->sec->name); log_debug("renaming and correlating static local %s to %s\n", patched_sym->name, sym->name); patched_sym->name = strdup(sym->name); sym->twin = patched_sym; patched_sym->twin = sym; /* set initial status, might change */ sym->status = patched_sym->status = SAME; if (bundled) { sym->sec->twin = patched_sym->sec; patched_sym->sec->twin = sym->sec; sym->sec->secsym->twin = patched_sym->sec->secsym; patched_sym->sec->secsym->twin = sym->sec->secsym; if (sym->sec->rela && patched_sym->sec->rela) { sym->sec->rela->twin = patched_sym->sec->rela; patched_sym->sec->rela->twin = sym->sec->rela; } } } } /* * Make sure that: * * 1. all the base object's referenced static locals have been * correlated; and * * 2. each reference to a static local in the base object has a * corresponding reference in the patched object (because a static * local can be referenced by more than one section). 
*/ list_for_each_entry(sec, &base->sections, list) { if (!is_rela_section(sec) || is_debug_section(sec)) continue; list_for_each_entry(rela, &sec->relas, list) { sym = rela->sym; if (!kpatch_is_normal_static_local(sym)) continue; if (!sym->twin || !sec->twin) DIFF_FATAL("reference to static local variable %s in %s was removed", sym->name, kpatch_section_function_name(sec)); found = 0; list_for_each_entry(rela2, &sec->twin->relas, list) { if (rela2->sym == sym->twin) { found = 1; break; } } if (!found) DIFF_FATAL("static local %s has been correlated with %s, but patched %s is missing a reference to it", sym->name, sym->twin->name, kpatch_section_function_name(sec->twin)); } } /* * Now go through the patched object and look for any uncorrelated * static locals to see if we need to print any warnings about new * variables. */ list_for_each_entry(sec, &patched->sections, list) { if (!is_rela_section(sec) || is_debug_section(sec)) continue; list_for_each_entry(rela, &sec->relas, list) { sym = rela->sym; if (!kpatch_is_normal_static_local(sym)) continue; if (sym->twin) continue; log_normal("WARNING: unable to correlate static local variable %s used by %s, assuming variable is new\n", sym->name, kpatch_section_function_name(sec)); return; } } } static void kpatch_correlate_elfs(struct kpatch_elf *kelf1, struct kpatch_elf *kelf2) { kpatch_correlate_sections(&kelf1->sections, &kelf2->sections); kpatch_correlate_symbols(&kelf1->symbols, &kelf2->symbols); } static void kpatch_compare_correlated_elements(struct kpatch_elf *kelf) { /* lists are already correlated at this point */ kpatch_compare_sections(&kelf->sections); kpatch_compare_symbols(&kelf->symbols); } #ifdef __x86_64__ static void rela_insn(struct section *sec, struct rela *rela, struct insn *insn) { unsigned long insn_addr, start, end, rela_addr; start = (unsigned long)sec->base->data->d_buf; end = start + sec->base->sh.sh_size; rela_addr = start + rela->offset; for (insn_addr = start; insn_addr < end; insn_addr += insn->length) { insn_init(insn, (void *)insn_addr, 1); insn_get_length(insn); if (!insn->length) ERROR("can't decode instruction in section %s at offset 0x%lx", sec->base->name, insn_addr); if (rela_addr >= insn_addr && rela_addr < insn_addr + insn->length) return; } } #endif /* * Mangle the relas a little. The compiler will sometimes use section symbols * to reference local objects and functions rather than the object or function * symbols themselves. We substitute the object/function symbols for the * section symbol in this case so that the relas can be properly correlated and * so that the existing object/function in vmlinux can be linked to. */ static void kpatch_replace_sections_syms(struct kpatch_elf *kelf) { struct section *sec; struct rela *rela; struct symbol *sym; int add_off; log_debug("\n"); list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec) || is_debug_section(sec)) continue; list_for_each_entry(rela, &sec->relas, list) { if (rela->sym->type != STT_SECTION) continue; /* * Replace references to bundled sections with their * symbols. */ if (rela->sym->sec && rela->sym->sec->sym) { rela->sym = rela->sym->sec->sym; /* * ppc64le: a GCC 6+ bundled function is at * offset 8 in its section. */ rela->addend -= rela->sym->sym.st_value; continue; } #ifdef __powerpc__ /* * With -mcmodel=large, R_PPC64_REL24 is only used for * functions. Assuming the function is bundled in a * section, the section symbol should have been * replaced with a text symbol already. Otherwise, * bail out. 
If we hit this situation, more core is * needed here to calculate the value of 'add_off'. */ if (rela->type == R_PPC64_REL24) ERROR("Unexpected relocation type R_PPC64_REL24 for %s\n", rela->sym->name); add_off = 0; #else if (rela->type == R_X86_64_PC32) { struct insn insn; rela_insn(sec, rela, &insn); add_off = (long)insn.next_byte - (long)sec->base->data->d_buf - rela->offset; } else if (rela->type == R_X86_64_64 || rela->type == R_X86_64_32S) add_off = 0; else continue; #endif /* * Attempt to replace references to unbundled sections * with their symbols. */ list_for_each_entry(sym, &kelf->symbols, list) { int start, end; if (sym->type == STT_SECTION || sym->sec != rela->sym->sec) continue; start = sym->sym.st_value; end = sym->sym.st_value + sym->sym.st_size; if (!is_text_section(sym->sec) && rela->type == R_X86_64_32S && rela->addend == sym->sec->sh.sh_size && end == sym->sec->sh.sh_size) { /* * A special case where gcc needs a * pointer to the address at the end of * a data section. * * This is usually used with a compare * instruction to determine when to end * a loop. The code doesn't actually * dereference the pointer so this is * "normal" and we just replace the * section reference with a reference * to the last symbol in the section. * * Note that this only catches the * issue when it happens at the end of * a section. It can also happen in * the middle of a section. In that * case, the wrong symbol will be * associated with the reference. But * that's ok because: * * 1) This situation only occurs when * gcc is trying to get the address * of the symbol, not the contents * of its data; and * * 2) Because kpatch doesn't allow data * sections to change, * &(var1+sizeof(var1)) will always * be the same as &var2. */ } else if (rela->addend + add_off < start || rela->addend + add_off >= end) continue; log_debug("%s: replacing %s+%d reference with %s+%d\n", sec->name, rela->sym->name, rela->addend, sym->name, rela->addend - start); rela->sym = sym; rela->addend -= start; break; } } } log_debug("\n"); } static void kpatch_check_func_profiling_calls(struct kpatch_elf *kelf) { struct symbol *sym; int errs = 0; list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type != STT_FUNC || sym->status != CHANGED) continue; if (!sym->twin->has_func_profiling) { log_normal("function %s has no fentry/mcount call, unable to patch\n", sym->name); errs++; } } if (errs) DIFF_FATAL("%d function(s) can not be patched", errs); } static void kpatch_verify_patchability(struct kpatch_elf *kelf) { struct section *sec; int errs = 0; list_for_each_entry(sec, &kelf->sections, list) { if (sec->status == CHANGED && !sec->include) { log_normal("changed section %s not selected for inclusion\n", sec->name); errs++; } if (sec->status != SAME && sec->grouped) { log_normal("changed section %s is part of a section group\n", sec->name); errs++; } if (sec->sh.sh_type == SHT_GROUP && sec->status == NEW) { log_normal("new/changed group sections are not supported\n"); errs++; } /* * ensure we aren't including .data.* or .bss.* * (.data.unlikely is ok b/c it only has __warned vars) */ if (sec->include && sec->status != NEW && (!strncmp(sec->name, ".data", 5) || !strncmp(sec->name, ".bss", 4)) && strcmp(sec->name, ".data.unlikely")) { log_normal("data section %s selected for inclusion\n", sec->name); errs++; } } if (errs) DIFF_FATAL("%d unsupported section change(s)", errs); } #define inc_printf(fmt, ...) 
\ log_debug("%*s" fmt, recurselevel, "", ##__VA_ARGS__); static void kpatch_include_symbol(struct symbol *sym, int recurselevel) { struct rela *rela; struct section *sec; inc_printf("start include_symbol(%s)\n", sym->name); sym->include = 1; inc_printf("symbol %s is included\n", sym->name); /* * Check if sym is a non-local symbol (sym->sec is NULL) or * if an unchanged local symbol. This a base case for the * inclusion recursion. */ if (!sym->sec || sym->sec->include || (sym->type != STT_SECTION && sym->status == SAME)) goto out; sec = sym->sec; sec->include = 1; inc_printf("section %s is included\n", sec->name); if (sec->secsym && sec->secsym != sym) { sec->secsym->include = 1; inc_printf("section symbol %s is included\n", sec->secsym->name); } if (!sec->rela) goto out; sec->rela->include = 1; inc_printf("section %s is included\n", sec->rela->name); list_for_each_entry(rela, &sec->rela->relas, list) kpatch_include_symbol(rela->sym, recurselevel+1); out: inc_printf("end include_symbol(%s)\n", sym->name); return; } static void kpatch_include_standard_elements(struct kpatch_elf *kelf) { struct section *sec; struct rela *rela; list_for_each_entry(sec, &kelf->sections, list) { /* include these sections even if they haven't changed */ if (!strcmp(sec->name, ".shstrtab") || !strcmp(sec->name, ".strtab") || !strcmp(sec->name, ".symtab") || !strcmp(sec->name, ".toc") || !strcmp(sec->name, ".rodata") || (!strncmp(sec->name, ".rodata.", 8) && strstr(sec->name, ".str1."))) { sec->include = 1; if (sec->secsym) sec->secsym->include = 1; } /* * On ppc64le, the .rela.toc section refers to symbols which * are needed for function symbol relocations. Include all the * symbols. */ if (!strcmp(sec->name, ".rela.toc")) { sec->include = 1; list_for_each_entry(rela, &sec->relas, list) kpatch_include_symbol(rela->sym, 0); } } /* include the NULL symbol */ list_entry(kelf->symbols.next, struct symbol, list)->include = 1; } static int kpatch_include_hook_elements(struct kpatch_elf *kelf) { struct section *sec; struct symbol *sym; struct rela *rela; int found = 0; /* include load/unload sections */ list_for_each_entry(sec, &kelf->sections, list) { if (!strcmp(sec->name, ".kpatch.hooks.load") || !strcmp(sec->name, ".kpatch.hooks.unload") || !strcmp(sec->name, ".rela.kpatch.hooks.load") || !strcmp(sec->name, ".rela.kpatch.hooks.unload")) { sec->include = 1; found = 1; if (is_rela_section(sec)) { /* include hook dependencies */ rela = list_entry(sec->relas.next, struct rela, list); sym = rela->sym; log_normal("found hook: %s\n",sym->name); kpatch_include_symbol(sym, 0); /* strip the hook symbol */ sym->include = 0; sym->sec->sym = NULL; /* use section symbol instead */ rela->sym = sym->sec->secsym; } else { sec->secsym->include = 1; } } } /* * Strip temporary global load/unload function pointer objects * used by the kpatch_[load|unload]() macros. 
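 *
 * (Sketch of what is being stripped, assuming the hook-registration
 * macros in kpatch-macros.h work roughly like this; the macro name
 * KPATCH_LOAD_HOOK is an assumption, not taken from this file:
 *
 *   static void my_load_hook(void) { ... }
 *   KPATCH_LOAD_HOOK(my_load_hook);
 *
 * The macro emits a function-pointer carrier object named
 * kpatch_load_data into the .kpatch.hooks.load section.  The loop above
 * already pulled in the hook code itself via kpatch_include_symbol(), so
 * the temporary kpatch_load_data / kpatch_unload_data carriers are
 * dropped below.)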
*/ list_for_each_entry(sym, &kelf->symbols, list) if (!strcmp(sym->name, "kpatch_load_data") || !strcmp(sym->name, "kpatch_unload_data")) sym->include = 0; return found; } static void kpatch_include_force_elements(struct kpatch_elf *kelf) { struct section *sec; struct symbol *sym; struct rela *rela; /* include force sections */ list_for_each_entry(sec, &kelf->sections, list) { if (!strcmp(sec->name, ".kpatch.force") || !strcmp(sec->name, ".rela.kpatch.force")) { sec->include = 1; if (!is_rela_section(sec)) { /* .kpatch.force */ sec->secsym->include = 1; continue; } /* .rela.kpatch.force */ list_for_each_entry(rela, &sec->relas, list) log_normal("function '%s' marked with KPATCH_FORCE_UNSAFE!\n", rela->sym->name); } } /* strip temporary global kpatch_force_func_* symbols */ list_for_each_entry(sym, &kelf->symbols, list) if (!strncmp(sym->name, "__kpatch_force_func_", strlen("__kpatch_force_func_"))) sym->include = 0; } static int kpatch_include_new_globals(struct kpatch_elf *kelf) { struct symbol *sym; int nr = 0; list_for_each_entry(sym, &kelf->symbols, list) { if (sym->bind == STB_GLOBAL && sym->sec && sym->status == NEW) { kpatch_include_symbol(sym, 0); nr++; } } return nr; } static int kpatch_include_changed_functions(struct kpatch_elf *kelf) { struct symbol *sym; int changed_nr = 0; log_debug("\n=== Inclusion Tree ===\n"); list_for_each_entry(sym, &kelf->symbols, list) { if (sym->status == CHANGED && sym->type == STT_FUNC) { changed_nr++; kpatch_include_symbol(sym, 0); } if (sym->type == STT_FILE) sym->include = 1; } return changed_nr; } static void kpatch_print_changes(struct kpatch_elf *kelf) { struct symbol *sym; list_for_each_entry(sym, &kelf->symbols, list) { if (!sym->include || !sym->sec || sym->type != STT_FUNC) continue; if (sym->status == NEW) log_normal("new function: %s\n", sym->name); else if (sym->status == CHANGED) log_normal("changed function: %s\n", sym->name); } } static void kpatch_migrate_symbols(struct list_head *src, struct list_head *dst, int (*select)(struct symbol *)) { struct symbol *sym, *safe; list_for_each_entry_safe(sym, safe, src, list) { if (select && !select(sym)) continue; list_del(&sym->list); list_add_tail(&sym->list, dst); } } static void kpatch_migrate_included_elements(struct kpatch_elf *kelf, struct kpatch_elf **kelfout) { struct section *sec, *safesec; struct symbol *sym, *safesym; struct kpatch_elf *out; /* allocate output kelf */ out = malloc(sizeof(*out)); if (!out) ERROR("malloc"); memset(out, 0, sizeof(*out)); INIT_LIST_HEAD(&out->sections); INIT_LIST_HEAD(&out->symbols); INIT_LIST_HEAD(&out->strings); /* migrate included sections from kelf to out */ list_for_each_entry_safe(sec, safesec, &kelf->sections, list) { if (!sec->include) continue; list_del(&sec->list); list_add_tail(&sec->list, &out->sections); sec->index = 0; if (!is_rela_section(sec) && sec->secsym && !sec->secsym->include) /* break link to non-included section symbol */ sec->secsym = NULL; } /* migrate included symbols from kelf to out */ list_for_each_entry_safe(sym, safesym, &kelf->symbols, list) { if (!sym->include) continue; list_del(&sym->list); list_add_tail(&sym->list, &out->symbols); sym->index = 0; sym->strip = 0; if (sym->sec && !sym->sec->include) /* break link to non-included section */ sym->sec = NULL; } *kelfout = out; } static void kpatch_reorder_symbols(struct kpatch_elf *kelf) { LIST_HEAD(symbols); /* migrate NULL sym */ kpatch_migrate_symbols(&kelf->symbols, &symbols, is_null_sym); /* migrate LOCAL FILE sym */ kpatch_migrate_symbols(&kelf->symbols, &symbols, 
is_file_sym); /* migrate LOCAL FUNC syms */ kpatch_migrate_symbols(&kelf->symbols, &symbols, is_local_func_sym); /* migrate all other LOCAL syms */ kpatch_migrate_symbols(&kelf->symbols, &symbols, is_local_sym); /* migrate all other (GLOBAL) syms */ kpatch_migrate_symbols(&kelf->symbols, &symbols, NULL); list_replace(&symbols, &kelf->symbols); } static int bug_table_group_size(struct kpatch_elf *kelf, int offset) { static int size = 0; char *str; if (!size) { str = getenv("BUG_STRUCT_SIZE"); if (!str) ERROR("BUG_STRUCT_SIZE not set"); size = atoi(str); } return size; } static int ex_table_group_size(struct kpatch_elf *kelf, int offset) { static int size = 0; char *str; if (!size) { str = getenv("EX_STRUCT_SIZE"); if (!str) ERROR("EX_STRUCT_SIZE not set"); size = atoi(str); } return size; } #ifdef __x86_64__ static int parainstructions_group_size(struct kpatch_elf *kelf, int offset) { static int size = 0; char *str; if (!size) { str = getenv("PARA_STRUCT_SIZE"); if (!str) ERROR("PARA_STRUCT_SIZE not set"); size = atoi(str); } return size; } static int altinstructions_group_size(struct kpatch_elf *kelf, int offset) { static int size = 0; char *str; if (!size) { str = getenv("ALT_STRUCT_SIZE"); if (!str) ERROR("ALT_STRUCT_SIZE not set"); size = atoi(str); } return size; } static int smp_locks_group_size(struct kpatch_elf *kelf, int offset) { return 4; } #endif #ifdef __powerpc__ static int fixup_entry_group_size(struct kpatch_elf *kelf, int offset) { static int size = 0; char *str; if (!size) { str = getenv("FIXUP_STRUCT_SIZE"); if (!str) ERROR("FIXUP_STRUCT_SIZE not set"); size = atoi(str); } return size; } static int fixup_lwsync_group_size(struct kpatch_elf *kelf, int offset) { return 4; } #endif /* * The rela groups in the .fixup section vary in size. The beginning of each * .fixup rela group is referenced by the __ex_table section. To find the size * of a .fixup rela group, we have to traverse the __ex_table relas. 
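 *
 * A worked example with made-up offsets: if the .fixup-referencing relas in
 * .rela__ex_table carry the addends 0x0, 0x8 and 0x14, then the .fixup group
 * starting at offset 0x0 is 0x8 - 0x0 = 8 bytes, the group starting at 0x8 is
 * 0x14 - 0x8 = 12 bytes, and the last group (at 0x14) runs to the end of the
 * .fixup section, i.e. its size is sh_size - 0x14.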
*/ static int fixup_group_size(struct kpatch_elf *kelf, int offset) { struct section *sec; struct rela *rela; int found; sec = find_section_by_name(&kelf->sections, ".rela__ex_table"); if (!sec) ERROR("missing .rela__ex_table section"); /* find beginning of this group */ found = 0; list_for_each_entry(rela, &sec->relas, list) { if (!strcmp(rela->sym->name, ".fixup") && rela->addend == offset) { found = 1; break; } } if (!found) ERROR("can't find .fixup rela group at offset %d\n", offset); /* find beginning of next group */ found = 0; list_for_each_entry_continue(rela, &sec->relas, list) { if (!strcmp(rela->sym->name, ".fixup") && rela->addend > offset) { found = 1; break; } } if (!found) { /* last group */ struct section *fixupsec; fixupsec = find_section_by_name(&kelf->sections, ".fixup"); return fixupsec->sh.sh_size - offset; } return rela->addend - offset; } static struct special_section special_sections[] = { { .name = "__bug_table", .group_size = bug_table_group_size, }, #ifdef __x86_64__ { .name = ".smp_locks", .group_size = smp_locks_group_size, }, { .name = ".parainstructions", .group_size = parainstructions_group_size, }, #endif { .name = ".fixup", .group_size = fixup_group_size, }, { .name = "__ex_table", /* must come after .fixup */ .group_size = ex_table_group_size, }, #ifdef __x86_64__ { .name = ".altinstructions", .group_size = altinstructions_group_size, }, #endif #ifdef __powerpc__ { .name = "__ftr_fixup", .group_size = fixup_entry_group_size, }, { .name = "__mmu_ftr_fixup", .group_size = fixup_entry_group_size, }, { .name = "__fw_ftr_fixup", .group_size = fixup_entry_group_size, }, { .name = "__lwsync_fixup", .group_size = fixup_lwsync_group_size, }, #endif {}, }; static int should_keep_rela_group(struct section *sec, int start, int size) { struct rela *rela; int found = 0; /* check if any relas in the group reference any changed functions */ list_for_each_entry(rela, &sec->relas, list) { if (rela->offset >= start && rela->offset < start + size && rela->sym->type == STT_FUNC && rela->sym->sec->include) { found = 1; log_debug("new/changed symbol %s found in special section %s\n", rela->sym->name, sec->name); } } return found; } /* * When updating .fixup, the corresponding addends in .ex_table need to be * updated too. Stash the result in rela.r_addend so that the calculation in * fixup_group_size() is not affected. */ static void kpatch_update_ex_table_addend(struct kpatch_elf *kelf, struct special_section *special, int src_offset, int dest_offset, int group_size) { struct rela *rela; struct section *sec; sec = find_section_by_name(&kelf->sections, ".rela__ex_table"); if (!sec) ERROR("missing .rela__ex_table section"); list_for_each_entry(rela, &sec->relas, list) { if (!strcmp(rela->sym->name, ".fixup") && rela->addend >= src_offset && rela->addend < src_offset + group_size) rela->rela.r_addend = rela->addend - (src_offset - dest_offset); } } static void kpatch_regenerate_special_section(struct kpatch_elf *kelf, struct special_section *special, struct section *sec) { struct rela *rela, *safe; char *src, *dest; int group_size, src_offset, dest_offset, include; LIST_HEAD(newrelas); src = sec->base->data->d_buf; /* alloc buffer for new base section */ dest = malloc(sec->base->sh.sh_size); if (!dest) ERROR("malloc"); /* Restore the stashed r_addend from kpatch_update_ex_table_addend. 
*/ if (!strcmp(special->name, "__ex_table")) { list_for_each_entry(rela, &sec->relas, list) { if (!strcmp(rela->sym->name, ".fixup")) rela->addend = rela->rela.r_addend; } } group_size = 0; src_offset = 0; dest_offset = 0; for ( ; src_offset < sec->base->sh.sh_size; src_offset += group_size) { group_size = special->group_size(kelf, src_offset); /* * In some cases the struct has padding at the end to ensure * that all structs after it are properly aligned. But the * last struct in the section may not be padded. In that case, * shrink the group_size such that it still (hopefully) * contains the data but doesn't go past the end of the * section. */ if (src_offset + group_size > sec->base->sh.sh_size) group_size = sec->base->sh.sh_size - src_offset; include = should_keep_rela_group(sec, src_offset, group_size); if (!include) continue; /* * Copy all relas in the group. It's possible that the relas * aren't sorted (e.g. .rela.fixup), so go through the entire * rela list each time. */ list_for_each_entry_safe(rela, safe, &sec->relas, list) { if (rela->offset >= src_offset && rela->offset < src_offset + group_size) { /* copy rela entry */ list_del(&rela->list); list_add_tail(&rela->list, &newrelas); rela->offset -= src_offset - dest_offset; rela->rela.r_offset = rela->offset; rela->sym->include = 1; if (!strcmp(special->name, ".fixup")) kpatch_update_ex_table_addend(kelf, special, src_offset, dest_offset, group_size); } } /* copy base section group */ memcpy(dest + dest_offset, src + src_offset, group_size); dest_offset += group_size; } if (!dest_offset) { /* no changed or global functions referenced */ sec->status = sec->base->status = SAME; sec->include = sec->base->include = 0; free(dest); return; } /* overwrite with new relas list */ list_replace(&newrelas, &sec->relas); /* include both rela and base sections */ sec->include = 1; sec->base->include = 1; /* include secsym so .kpatch.arch relas can point to section symbols */ sec->base->secsym->include = 1; /* * Update text section data buf and size. * * The rela section's data buf and size will be regenerated in * kpatch_rebuild_rela_section_data(). */ sec->base->data->d_buf = dest; sec->base->data->d_size = dest_offset; } static void kpatch_include_debug_sections(struct kpatch_elf *kelf) { struct section *sec; struct rela *rela, *saferela; /* include all .debug_* sections */ list_for_each_entry(sec, &kelf->sections, list) { if (is_debug_section(sec)) { sec->include = 1; if (!is_rela_section(sec)) sec->secsym->include = 1; } } /* * Go through the .rela.debug_ sections and strip entries * referencing unchanged symbols */ list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec) || !is_debug_section(sec)) continue; list_for_each_entry_safe(rela, saferela, &sec->relas, list) if (!rela->sym->sec->include) list_del(&rela->list); } } static void kpatch_mark_ignored_sections(struct kpatch_elf *kelf) { struct section *sec, *strsec, *ignoresec; struct rela *rela; char *name; /* Ignore any discarded sections */ list_for_each_entry(sec, &kelf->sections, list) { if (!strncmp(sec->name, ".discard", 8) || !strncmp(sec->name, ".rela.discard", 13)) sec->ignore = 1; } sec = find_section_by_name(&kelf->sections, ".kpatch.ignore.sections"); if (!sec) return; list_for_each_entry(rela, &sec->rela->relas, list) { strsec = rela->sym->sec; strsec->status = CHANGED; /* * Include the string section here. This is because the * KPATCH_IGNORE_SECTION() macro is passed a literal string * by the patch author, resulting in a change to the string * section. 
If we don't include it, then we will potentially * get a "changed section not included" error in * kpatch_verify_patchability() if no other function based change * also changes the string section. We could try to exclude each * literal string added to the section by KPATCH_IGNORE_SECTION() * from the section data comparison, but this is a simpler way. */ strsec->include = 1; name = strsec->data->d_buf + rela->addend; ignoresec = find_section_by_name(&kelf->sections, name); if (!ignoresec) ERROR("KPATCH_IGNORE_SECTION: can't find %s", name); log_normal("ignoring section: %s\n", name); if (is_rela_section(ignoresec)) ignoresec = ignoresec->base; ignoresec->ignore = 1; if (ignoresec->twin) ignoresec->twin->ignore = 1; } } static void kpatch_mark_ignored_sections_same(struct kpatch_elf *kelf) { struct section *sec; struct symbol *sym; list_for_each_entry(sec, &kelf->sections, list) { if (!sec->ignore) continue; sec->status = SAME; if (!is_rela_section(sec)) { if (sec->secsym) sec->secsym->status = SAME; if (sec->rela) sec->rela->status = SAME; } list_for_each_entry(sym, &kelf->symbols, list) { if (sym->sec != sec) continue; sym->status = SAME; } } /* strip temporary global __UNIQUE_ID_kpatch_ignore_section_* symbols */ list_for_each_entry(sym, &kelf->symbols, list) if (!strncmp(sym->name, "__UNIQUE_ID_kpatch_ignore_section_", strlen("__UNIQUE_ID_kpatch_ignore_section_"))) sym->status = SAME; } static void kpatch_mark_ignored_functions_same(struct kpatch_elf *kelf) { struct section *sec; struct symbol *sym; struct rela *rela; sec = find_section_by_name(&kelf->sections, ".kpatch.ignore.functions"); if (!sec) return; list_for_each_entry(rela, &sec->rela->relas, list) { if (!rela->sym->sec) ERROR("expected bundled symbol"); if (rela->sym->type != STT_FUNC) ERROR("expected function symbol"); log_normal("ignoring function: %s\n", rela->sym->name); if (rela->sym->status != CHANGED) log_normal("NOTICE: no change detected in function %s, unnecessary KPATCH_IGNORE_FUNCTION()?\n", rela->sym->name); rela->sym->status = SAME; rela->sym->sec->status = SAME; if (rela->sym->sec->secsym) rela->sym->sec->secsym->status = SAME; if (rela->sym->sec->rela) rela->sym->sec->rela->status = SAME; } /* strip temporary global kpatch_ignore_func_* symbols */ list_for_each_entry(sym, &kelf->symbols, list) if (!strncmp(sym->name, "__kpatch_ignore_func_", strlen("__kpatch_ignore_func_"))) sym->status = SAME; } static void kpatch_create_kpatch_arch_section(struct kpatch_elf *kelf, char *objname) { struct special_section *special; struct kpatch_arch *entries; struct symbol *strsym; struct section *sec, *karch_sec; struct rela *rela; int nr, index = 0; nr = sizeof(special_sections) / sizeof(special_sections[0]); karch_sec = create_section_pair(kelf, ".kpatch.arch", sizeof(*entries), nr); entries = karch_sec->data->d_buf; /* lookup strings symbol */ strsym = find_symbol_by_name(&kelf->symbols, ".kpatch.strings"); if (!strsym) ERROR("can't find .kpatch.strings symbol"); for (special = special_sections; special->name; special++) { if (strcmp(special->name, ".parainstructions") && strcmp(special->name, ".altinstructions")) continue; sec = find_section_by_name(&kelf->sections, special->name); if (!sec) continue; /* entries[index].sec */ ALLOC_LINK(rela, &karch_sec->rela->relas); rela->sym = sec->secsym; rela->type = ABSOLUTE_RELA_TYPE; rela->addend = 0; rela->offset = index * sizeof(*entries) + \ offsetof(struct kpatch_arch, sec); /* entries[index].objname */ ALLOC_LINK(rela, &karch_sec->rela->relas); rela->sym = strsym; rela->type 
= ABSOLUTE_RELA_TYPE; rela->addend = offset_of_string(&kelf->strings, objname); rela->offset = index * sizeof(*entries) + \ offsetof(struct kpatch_arch, objname); index++; } karch_sec->data->d_size = index * sizeof(struct kpatch_arch); karch_sec->sh.sh_size = karch_sec->data->d_size; } static void kpatch_process_special_sections(struct kpatch_elf *kelf) { struct special_section *special; struct section *sec; struct symbol *sym; struct rela *rela; int altinstr = 0; for (special = special_sections; special->name; special++) { sec = find_section_by_name(&kelf->sections, special->name); if (!sec) continue; sec = sec->rela; if (!sec) continue; kpatch_regenerate_special_section(kelf, special, sec); if (!strcmp(special->name, ".altinstructions") && sec->base->include) altinstr = 1; } /* * The following special sections don't have relas which reference * non-included symbols, so their entire rela section can be included. */ list_for_each_entry(sec, &kelf->sections, list) { if (strcmp(sec->name, ".altinstr_replacement")) continue; /* * Only include .altinstr_replacement if .altinstructions * is also included. */ if (!altinstr) break; /* include base section */ sec->include = 1; /* include all symbols in the section */ list_for_each_entry(sym, &kelf->symbols, list) if (sym->sec == sec) sym->include = 1; /* include rela section */ if (sec->rela) { sec->rela->include = 1; /* include all symbols referenced by relas */ list_for_each_entry(rela, &sec->rela->relas, list) rela->sym->include = 1; } } /* * The following special sections aren't supported, so make sure we * don't ever try to include them. Otherwise the kernel will see the * jump table during module loading and get confused. Generally it * should be safe to exclude them, it just means that you can't modify * jump labels and enable tracepoints in a patched function. 
*/ list_for_each_entry(sec, &kelf->sections, list) { if (strcmp(sec->name, "__jump_table") && strcmp(sec->name, "__tracepoints") && strcmp(sec->name, "__tracepoints_ptrs") && strcmp(sec->name, "__tracepoints_strings")) continue; sec->status = SAME; sec->include = 0; if (sec->rela) { sec->rela->status = SAME; sec->rela->include = 0; } } } static struct sym_compare_type *kpatch_elf_locals(struct kpatch_elf *kelf) { struct symbol *sym; int i = 0, sym_num = 0; struct sym_compare_type *sym_array; list_for_each_entry(sym, &kelf->symbols, list) { if (sym->bind != STB_LOCAL) continue; if (sym->type != STT_FUNC && sym->type != STT_OBJECT) continue; sym_num++; } if (!sym_num) return NULL; sym_array = malloc((sym_num + 1) * sizeof(struct sym_compare_type)); if (!sym_array) ERROR("malloc"); list_for_each_entry(sym, &kelf->symbols, list) { if (sym->bind != STB_LOCAL) continue; if (sym->type != STT_FUNC && sym->type != STT_OBJECT) continue; sym_array[i].type = sym->type; sym_array[i++].name = sym->name; } sym_array[i].type = 0; sym_array[i].name = NULL; return sym_array; } static void kpatch_create_patches_sections(struct kpatch_elf *kelf, struct lookup_table *table, char *hint, char *objname) { int nr, index, objname_offset; struct section *sec, *relasec; struct symbol *sym, *strsym; struct rela *rela; struct lookup_result result; struct kpatch_patch_func *funcs; /* count patched functions */ nr = 0; list_for_each_entry(sym, &kelf->symbols, list) if (sym->type == STT_FUNC && sym->status == CHANGED) nr++; /* create text/rela section pair */ sec = create_section_pair(kelf, ".kpatch.funcs", sizeof(*funcs), nr); relasec = sec->rela; funcs = sec->data->d_buf; /* lookup strings symbol */ strsym = find_symbol_by_name(&kelf->symbols, ".kpatch.strings"); if (!strsym) ERROR("can't find .kpatch.strings symbol"); /* add objname to strings */ objname_offset = offset_of_string(&kelf->strings, objname); /* populate sections */ index = 0; list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type == STT_FUNC && sym->status == CHANGED) { if (sym->bind == STB_LOCAL) { if (lookup_local_symbol(table, sym->name, &result)) ERROR("lookup_local_symbol %s (%s)", sym->name, hint); } else { if(lookup_global_symbol(table, sym->name, &result)) ERROR("lookup_global_symbol %s", sym->name); } log_debug("lookup for %s @ 0x%016lx len %lu\n", sym->name, result.value, result.size); /* * Convert global symbols to local so other objects in * the patch module (like the patch hook object's init * code) won't link to this function and call it before * its relocations have been applied. */ sym->bind = STB_LOCAL; sym->sym.st_info = GELF_ST_INFO(sym->bind, sym->type); /* add entry in text section */ funcs[index].old_addr = result.value; funcs[index].old_size = result.size; funcs[index].new_size = sym->sym.st_size; funcs[index].sympos = result.pos; /* * Add a relocation that will populate * the funcs[index].new_addr field at * module load time. */ ALLOC_LINK(rela, &relasec->relas); rela->sym = sym; rela->type = ABSOLUTE_RELA_TYPE; rela->addend = 0; rela->offset = index * sizeof(*funcs); /* * Add a relocation that will populate * the funcs[index].name field. */ ALLOC_LINK(rela, &relasec->relas); rela->sym = strsym; rela->type = ABSOLUTE_RELA_TYPE; rela->addend = offset_of_string(&kelf->strings, sym->name); rela->offset = index * sizeof(*funcs) + offsetof(struct kpatch_patch_func, name); /* * Add a relocation that will populate * the funcs[index].objname field. 
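 *
 * Taken together, once these relas are applied, funcs[index] describes one
 * patched function. Illustrative end result (field names as used above,
 * example values made up): old_addr/old_size come from the running kernel's
 * symbol lookup, new_addr/new_size describe the replacement function carried
 * in this module, sympos disambiguates duplicate symbols, and name/objname
 * point into .kpatch.strings, e.g. "my_changed_func" / "vmlinux".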
*/ ALLOC_LINK(rela, &relasec->relas); rela->sym = strsym; rela->type = ABSOLUTE_RELA_TYPE; rela->addend = objname_offset; rela->offset = index * sizeof(*funcs) + offsetof(struct kpatch_patch_func,objname); index++; } } /* sanity check, index should equal nr */ if (index != nr) ERROR("size mismatch in funcs sections"); } static int kpatch_is_core_module_symbol(char *name) { return (!strcmp(name, "kpatch_shadow_alloc") || !strcmp(name, "kpatch_shadow_free") || !strcmp(name, "kpatch_shadow_get")); } static void kpatch_create_intermediate_sections(struct kpatch_elf *kelf, struct lookup_table *table, char *hint, char *objname, char *pmod_name) { int nr, index; struct section *sec, *ksym_sec, *krela_sec; struct rela *rela, *rela2, *safe; struct symbol *strsym, *ksym_sec_sym; struct kpatch_symbol *ksyms; struct kpatch_relocation *krelas; struct lookup_result result; char *sym_objname; int ret, vmlinux, external; vmlinux = !strcmp(objname, "vmlinux"); /* count rela entries that need to be dynamic */ nr = 0; list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec)) continue; if (!strcmp(sec->name, ".rela.kpatch.funcs")) continue; list_for_each_entry(rela, &sec->relas, list) nr++; /* upper bound on number of kpatch relas and symbols */ } /* create .kpatch.relocations text/rela section pair */ krela_sec = create_section_pair(kelf, ".kpatch.relocations", sizeof(*krelas), nr); krelas = krela_sec->data->d_buf; /* create .kpatch.symbols text/rela section pair */ ksym_sec = create_section_pair(kelf, ".kpatch.symbols", sizeof(*ksyms), nr); ksyms = ksym_sec->data->d_buf; /* create .kpatch.symbols section symbol (to set rela->sym later) */ ALLOC_LINK(ksym_sec_sym, &kelf->symbols); ksym_sec_sym->sec = ksym_sec; ksym_sec_sym->sym.st_info = GELF_ST_INFO(STB_LOCAL, STT_SECTION); ksym_sec_sym->type = STT_SECTION; ksym_sec_sym->bind = STB_LOCAL; ksym_sec_sym->name = ".kpatch.symbols"; /* lookup strings symbol */ strsym = find_symbol_by_name(&kelf->symbols, ".kpatch.strings"); if (!strsym) ERROR("can't find .kpatch.strings symbol"); /* populate sections */ index = 0; list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec)) continue; if (!strcmp(sec->name, ".rela.kpatch.funcs") || !strcmp(sec->name, ".rela.kpatch.dynrelas")) continue; list_for_each_entry_safe(rela, safe, &sec->relas, list) { if (rela->sym->sec) continue; /* * Allow references to core module symbols to remain as * normal relas, since the core module may not be * compiled into the kernel, and they should be * exported anyway. */ if (kpatch_is_core_module_symbol(rela->sym->name)) continue; external = 0; /* * sym_objname is the name of the object to which * rela->sym belongs. We'll need this to build * ".klp.sym." symbol names later on. * * By default sym_objname is the name of the * component being patched (vmlinux or module). * If it's an external symbol, sym_objname * will get reassigned appropriately. */ sym_objname = objname; /* * On ppc64le, the function prologue generated by GCC 6 * has the sequence: * * .globl my_func * .type my_func, @function * .quad .TOC.-my_func * my_func: * .reloc ., R_PPC64_ENTRY ; optional * ld r2,-8(r12) * add r2,r2,r12 * .localentry my_func, .-my_func * * The R_PPC64_ENTRY is optional and its symbol might * have an empty name. Leave it as a normal rela. 
*/ if (rela->type == R_PPC64_ENTRY) continue; if (rela->sym->bind == STB_LOCAL) { /* An unchanged local symbol */ ret = lookup_local_symbol(table, rela->sym->name, &result); if (ret) ERROR("lookup_local_symbol %s:%s needed for %s", hint, rela->sym->name, sec->base->name); } else if (vmlinux) { /* * We have a patch to vmlinux which references * a global symbol. Use a normal rela for * exported symbols and a dynrela otherwise. */ #ifdef __powerpc__ /* * An exported symbol might be local to an * object file and any access to the function * might be through localentry (toc+offset) * instead of global offset. * * fs/proc/proc_sysctl::sysctl_head_grab: * 166: 0000000000000000 256 FUNC GLOBAL DEFAULT [: 8] 42 unregister_sysctl_table * 167: 0000000000000000 0 NOTYPE GLOBAL DEFAULT UND .TOC. * * These type of symbols have a type of * STT_FUNC. Treat them like local symbols. * They will be handled by the livepatch * relocation code. */ if (lookup_is_exported_symbol(table, rela->sym->name)) { if (rela->sym->type != STT_FUNC) continue; } #else if (lookup_is_exported_symbol(table, rela->sym->name)) continue; #endif /* * If lookup_global_symbol() fails, assume the * symbol is defined in another object in the * patch module. */ if (lookup_global_symbol(table, rela->sym->name, &result)) continue; } else { /* * We have a patch to a module which references * a global symbol. Try to find the symbol in * the module being patched. */ if (lookup_global_symbol(table, rela->sym->name, &result)) { /* * Not there, see if the symbol is * exported, and set sym_objname to the * object the exported symbol belongs * to. If it's not exported, assume sym * is provided by another .o in the * patch module. */ sym_objname = lookup_exported_symbol_objname(table, rela->sym->name); if (!sym_objname) sym_objname = pmod_name; /* * For a symbol exported by vmlinux, use * the original rela. * * For a symbol exported by a module, * convert to a dynrela because the * module might not be loaded yet. 
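 *
 * Illustrative example: for a patch to a module, a call to printk
 * (exported by vmlinux) keeps its normal rela, since vmlinux is always
 * present when the patch module is linked and loaded; a call to a
 * symbol exported by some other module becomes an external dynrela so
 * it can be resolved once that module is actually loaded.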
*/ if (!strcmp(sym_objname, "vmlinux")) continue; external = 1; } } log_debug("lookup for %s @ 0x%016lx len %lu\n", rela->sym->name, result.value, result.size); /* Fill in ksyms[index] */ if (vmlinux) ksyms[index].src = result.value; else /* for modules, src is discovered at runtime */ ksyms[index].src = 0; ksyms[index].pos = result.pos; ksyms[index].type = rela->sym->type; ksyms[index].bind = rela->sym->bind; /* add rela to fill in ksyms[index].name field */ ALLOC_LINK(rela2, &ksym_sec->rela->relas); rela2->sym = strsym; rela2->type = ABSOLUTE_RELA_TYPE; rela2->addend = offset_of_string(&kelf->strings, rela->sym->name); rela2->offset = index * sizeof(*ksyms) + \ offsetof(struct kpatch_symbol, name); /* add rela to fill in ksyms[index].objname field */ ALLOC_LINK(rela2, &ksym_sec->rela->relas); rela2->sym = strsym; rela2->type = ABSOLUTE_RELA_TYPE; rela2->addend = offset_of_string(&kelf->strings, sym_objname); rela2->offset = index * sizeof(*ksyms) + \ offsetof(struct kpatch_symbol, objname); /* Fill in krelas[index] */ krelas[index].addend = rela->addend; krelas[index].type = rela->type; krelas[index].external = external; /* add rela to fill in krelas[index].dest field */ ALLOC_LINK(rela2, &krela_sec->rela->relas); if (sec->base->secsym) rela2->sym = sec->base->secsym; else ERROR("can't create dynrela for section %s (symbol %s): no bundled or section symbol", sec->name, rela->sym->name); rela2->type = ABSOLUTE_RELA_TYPE; rela2->addend = rela->offset; rela2->offset = index * sizeof(*krelas) + \ offsetof(struct kpatch_relocation, dest); /* add rela to fill in krelas[index].objname field */ ALLOC_LINK(rela2, &krela_sec->rela->relas); rela2->sym = strsym; rela2->type = ABSOLUTE_RELA_TYPE; rela2->addend = offset_of_string(&kelf->strings, objname); rela2->offset = index * sizeof(*krelas) + \ offsetof(struct kpatch_relocation, objname); /* add rela to fill in krelas[index].ksym field */ ALLOC_LINK(rela2, &krela_sec->rela->relas); rela2->sym = ksym_sec_sym; rela2->type = ABSOLUTE_RELA_TYPE; rela2->addend = index * sizeof(*ksyms); rela2->offset = index * sizeof(*krelas) + \ offsetof(struct kpatch_relocation, ksym); rela->sym->strip = 1; list_del(&rela->list); free(rela); index++; } } /* set size to actual number of ksyms/krelas */ ksym_sec->data->d_size = index * sizeof(struct kpatch_symbol); ksym_sec->sh.sh_size = ksym_sec->data->d_size; krela_sec->data->d_size = index * sizeof(struct kpatch_relocation); krela_sec->sh.sh_size = krela_sec->data->d_size; } static void kpatch_create_hooks_objname_rela(struct kpatch_elf *kelf, char *objname) { struct section *sec; struct rela *rela; struct symbol *strsym; int objname_offset; /* lookup strings symbol */ strsym = find_symbol_by_name(&kelf->symbols, ".kpatch.strings"); if (!strsym) ERROR("can't find .kpatch.strings symbol"); /* add objname to strings */ objname_offset = offset_of_string(&kelf->strings, objname); list_for_each_entry(sec, &kelf->sections, list) { if (strcmp(sec->name, ".rela.kpatch.hooks.load") && strcmp(sec->name, ".rela.kpatch.hooks.unload")) continue; ALLOC_LINK(rela, &sec->relas); rela->sym = strsym; rela->type = ABSOLUTE_RELA_TYPE; rela->addend = objname_offset; rela->offset = offsetof(struct kpatch_patch_hook, objname); } } #ifdef __powerpc__ void kpatch_create_mcount_sections(struct kpatch_elf *kelf) { } #else /* * This function basically reimplements the functionality of the Linux * recordmcount script, so that patched functions can be recognized by ftrace. 
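 *
 * Illustrative x86_64 byte view: the code below expects the patched
 * function to start with a 5-byte nop left at the fentry call site, e.g.
 *
 *   0f 1f 44 00 00        nopl   0x0(%rax,%rax,1)
 *
 * and rewrites it to
 *
 *   e8 00 00 00 00        callq  <__fentry__>
 *
 * converting the function's first rela to R_X86_64_PC32 so that the call
 * target is filled in when the module is linked.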
* * TODO: Eventually we can modify recordmount so that it recognizes our bundled * sections as valid and does this work for us. */ static void kpatch_create_mcount_sections(struct kpatch_elf *kelf) { int nr, index; struct section *sec, *relasec; struct symbol *sym; struct rela *rela; void **funcs, *newdata; unsigned char *insn; nr = 0; list_for_each_entry(sym, &kelf->symbols, list) if (sym->type == STT_FUNC && sym->status != SAME && sym->has_func_profiling) nr++; /* create text/rela section pair */ sec = create_section_pair(kelf, "__mcount_loc", sizeof(*funcs), nr); relasec = sec->rela; funcs = sec->data->d_buf; /* populate sections */ index = 0; list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type != STT_FUNC || sym->status == SAME) continue; if (!sym->has_func_profiling) { log_debug("function %s has no fentry/mcount call, no mcount record is needed\n", sym->name); continue; } /* add rela in .rela__mcount_loc to fill in function pointer */ ALLOC_LINK(rela, &relasec->relas); rela->sym = sym; rela->type = R_X86_64_64; rela->addend = 0; rela->offset = index * sizeof(*funcs); /* * Modify the first instruction of the function to "callq * __fentry__" so that ftrace will be happy. */ newdata = malloc(sym->sec->data->d_size); memcpy(newdata, sym->sec->data->d_buf, sym->sec->data->d_size); sym->sec->data->d_buf = newdata; insn = newdata; if (insn[0] != 0xf) ERROR("%s: unexpected instruction at the start of the function", sym->name); insn[0] = 0xe8; insn[1] = 0; insn[2] = 0; insn[3] = 0; insn[4] = 0; rela = list_first_entry(&sym->sec->rela->relas, struct rela, list); rela->type = R_X86_64_PC32; index++; } /* sanity check, index should equal nr */ if (index != nr) ERROR("size mismatch in funcs sections"); } #endif /* * This function strips out symbols that were referenced by changed rela * sections, but the rela entries that referenced them were converted to * dynrelas and are no longer needed. 
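 *
 * Illustrative example: if a changed function calls an unchanged local
 * helper, the call's rela was converted to a dynrela above and the helper's
 * symbol was marked with sym->strip; since nothing in the output object
 * references that symbol any more, it is removed here.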
*/ static void kpatch_strip_unneeded_syms(struct kpatch_elf *kelf, struct lookup_table *table) { struct symbol *sym, *safe; list_for_each_entry_safe(sym, safe, &kelf->symbols, list) { if (sym->strip) { list_del(&sym->list); free(sym); } } } static void kpatch_create_strings_elements(struct kpatch_elf *kelf) { struct section *sec; struct symbol *sym; /* create .kpatch.strings */ /* allocate section resources */ ALLOC_LINK(sec, &kelf->sections); sec->name = ".kpatch.strings"; /* set data */ sec->data = malloc(sizeof(*sec->data)); if (!sec->data) ERROR("malloc"); sec->data->d_type = ELF_T_BYTE; /* set section header */ sec->sh.sh_type = SHT_PROGBITS; sec->sh.sh_entsize = 1; sec->sh.sh_addralign = 1; sec->sh.sh_flags = SHF_ALLOC; /* create .kpatch.strings section symbol (reuse sym variable) */ ALLOC_LINK(sym, &kelf->symbols); sym->sec = sec; sym->sym.st_info = GELF_ST_INFO(STB_LOCAL, STT_SECTION); sym->type = STT_SECTION; sym->bind = STB_LOCAL; sym->name = ".kpatch.strings"; } static void kpatch_build_strings_section_data(struct kpatch_elf *kelf) { struct string *string; struct section *sec; int size; char *strtab; sec = find_section_by_name(&kelf->sections, ".kpatch.strings"); if (!sec) ERROR("can't find .kpatch.strings"); /* determine size */ size = 0; list_for_each_entry(string, &kelf->strings, list) size += strlen(string->name) + 1; /* allocate section resources */ strtab = malloc(size); if (!strtab) ERROR("malloc"); sec->data->d_buf = strtab; sec->data->d_size = size; /* populate strings section data */ list_for_each_entry(string, &kelf->strings, list) { strcpy(strtab, string->name); strtab += strlen(string->name) + 1; } } struct arguments { char *args[6]; int debug; }; static char args_doc[] = "original.o patched.o kernel-object output.o Module.symvers patch-module-name"; static struct argp_option options[] = { {"debug", 'd', NULL, 0, "Show debug output" }, { NULL } }; static error_t parse_opt (int key, char *arg, struct argp_state *state) { /* Get the input argument from argp_parse, which we know is a pointer to our arguments structure. */ struct arguments *arguments = state->input; switch (key) { case 'd': arguments->debug = 1; break; case ARGP_KEY_ARG: if (state->arg_num >= 6) /* Too many arguments. */ argp_usage (state); arguments->args[state->arg_num] = arg; break; case ARGP_KEY_END: if (state->arg_num < 6) /* Not enough arguments. 
*/ argp_usage (state); break; default: return ARGP_ERR_UNKNOWN; } return 0; } static struct argp argp = { options, parse_opt, args_doc, NULL }; int main(int argc, char *argv[]) { struct kpatch_elf *kelf_base, *kelf_patched, *kelf_out; struct arguments arguments; int num_changed, hooks_exist, new_globals_exist; struct lookup_table *lookup; struct section *sec, *symtab; struct symbol *sym; char *hint = NULL, *objname, *pos; char *mod_symvers_path, *pmod_name; struct sym_compare_type *base_locals; arguments.debug = 0; argp_parse (&argp, argc, argv, 0, NULL, &arguments); if (arguments.debug) loglevel = DEBUG; elf_version(EV_CURRENT); childobj = basename(arguments.args[0]); mod_symvers_path = arguments.args[4]; pmod_name = arguments.args[5]; kelf_base = kpatch_elf_open(arguments.args[0]); kelf_patched = kpatch_elf_open(arguments.args[1]); kpatch_bundle_symbols(kelf_base); kpatch_bundle_symbols(kelf_patched); kpatch_compare_elf_headers(kelf_base->elf, kelf_patched->elf); kpatch_check_program_headers(kelf_base->elf); kpatch_check_program_headers(kelf_patched->elf); list_for_each_entry(sym, &kelf_base->symbols, list) { if (sym->type == STT_FILE) { hint = sym->name; break; } } if (!hint) ERROR("FILE symbol not found in base. Stripped?\n"); /* create symbol lookup table */ base_locals = kpatch_elf_locals(kelf_base); lookup = lookup_open(arguments.args[2], mod_symvers_path, hint, base_locals); free(base_locals); kpatch_mark_grouped_sections(kelf_patched); kpatch_replace_sections_syms(kelf_base); kpatch_replace_sections_syms(kelf_patched); kpatch_rename_mangled_functions(kelf_base, kelf_patched); kpatch_correlate_elfs(kelf_base, kelf_patched); kpatch_correlate_static_local_variables(kelf_base, kelf_patched); /* * After this point, we don't care about kelf_base anymore. * We access its sections via the twin pointers in the * section, symbol, and rela lists of kelf_patched. */ kpatch_mark_ignored_sections(kelf_patched); kpatch_compare_correlated_elements(kelf_patched); kpatch_check_func_profiling_calls(kelf_patched); kpatch_elf_teardown(kelf_base); kpatch_elf_free(kelf_base); kpatch_mark_ignored_functions_same(kelf_patched); kpatch_mark_ignored_sections_same(kelf_patched); kpatch_include_standard_elements(kelf_patched); num_changed = kpatch_include_changed_functions(kelf_patched); kpatch_include_debug_sections(kelf_patched); hooks_exist = kpatch_include_hook_elements(kelf_patched); kpatch_include_force_elements(kelf_patched); new_globals_exist = kpatch_include_new_globals(kelf_patched); kpatch_print_changes(kelf_patched); kpatch_dump_kelf(kelf_patched); kpatch_process_special_sections(kelf_patched); kpatch_verify_patchability(kelf_patched); if (!num_changed && !new_globals_exist) { if (hooks_exist) log_debug("no changed functions were found, but hooks exist\n"); else { log_debug("no changed functions were found\n"); return 3; /* 1 is ERROR, 2 is DIFF_FATAL */ } } /* this is destructive to kelf_patched */ kpatch_migrate_included_elements(kelf_patched, &kelf_out); /* * Teardown kelf_patched since we shouldn't access sections or symbols * through it anymore. Don't free however, since our section and symbol * name fields still point to strings in the Elf object owned by * kpatch_patched. 
*/ kpatch_elf_teardown(kelf_patched); /* extract module name (destructive to arguments.modulefile) */ objname = basename(arguments.args[2]); if (!strncmp(objname, "vmlinux-", 8)) objname = "vmlinux"; else { pos = strchr(objname,'.'); if (pos) { /* kernel module */ *pos = '\0'; pos = objname; while ((pos = strchr(pos, '-'))) *pos++ = '_'; } } /* create strings, patches, and dynrelas sections */ kpatch_create_strings_elements(kelf_out); kpatch_create_patches_sections(kelf_out, lookup, hint, objname); kpatch_create_intermediate_sections(kelf_out, lookup, hint, objname, pmod_name); kpatch_create_kpatch_arch_section(kelf_out, objname); kpatch_create_hooks_objname_rela(kelf_out, objname); kpatch_build_strings_section_data(kelf_out); kpatch_create_mcount_sections(kelf_out); /* * At this point, the set of output sections and symbols is * finalized. Reorder the symbols into linker-compliant * order and index all the symbols and sections. After the * indexes have been established, update index data * throughout the structure. */ kpatch_reorder_symbols(kelf_out); kpatch_strip_unneeded_syms(kelf_out, lookup); kpatch_reindex_elements(kelf_out); /* * Update rela section headers and rebuild the rela section data * buffers from the relas lists. */ symtab = find_section_by_name(&kelf_out->sections, ".symtab"); list_for_each_entry(sec, &kelf_out->sections, list) { if (!is_rela_section(sec)) continue; sec->sh.sh_link = symtab->index; sec->sh.sh_info = sec->base->index; kpatch_rebuild_rela_section_data(sec); } kpatch_create_shstrtab(kelf_out); kpatch_create_strtab(kelf_out); kpatch_create_symtab(kelf_out); kpatch_dump_kelf(kelf_out); kpatch_write_output_elf(kelf_out, kelf_patched->elf, arguments.args[3]); kpatch_elf_free(kelf_patched); kpatch_elf_teardown(kelf_out); kpatch_elf_free(kelf_out); return 0; } kpatch-0.5.0/kpatch-build/create-klp-module.c000066400000000000000000000322311321664017000210140ustar00rootroot00000000000000/* * create-klp-module.c * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. */ #include #include #include #include #include "log.h" #include "kpatch-elf.h" #include "kpatch-intermediate.h" /* For log.h */ char *childobj; enum loglevel loglevel = NORMAL; /* * Add a symbol from .kpatch.symbols to the symbol table * * If a symbol matching the .kpatch.symbols entry already * exists, return it. 
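 *
 * The symbol is named using the livepatch symbol format
 * ".klp.sym.<objname>.<name>,<sympos>", for example (illustrative):
 *
 *   .klp.sym.vmlinux.printk,0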
*/ static struct symbol *find_or_add_ksym_to_symbols(struct kpatch_elf *kelf, struct section *ksymsec, char *strings, int offset) { struct kpatch_symbol *ksyms, *ksym; struct symbol *sym; struct rela *rela; char *objname, *name; char pos[32], buf[256]; int index; ksyms = ksymsec->data->d_buf; index = offset / sizeof(*ksyms); ksym = &ksyms[index]; /* Get name of ksym */ rela = find_rela_by_offset(ksymsec->rela, offset + offsetof(struct kpatch_symbol, name)); if (!rela) ERROR("name of ksym not found?"); name = strdup(strings + rela->addend); if (!name) ERROR("strdup"); /* Get objname of ksym */ rela = find_rela_by_offset(ksymsec->rela, offset + offsetof(struct kpatch_symbol, objname)); if (!rela) ERROR("objname of ksym not found?"); objname = strdup(strings + rela->addend); if (!objname) ERROR("strdup"); snprintf(pos, 32, "%lu", ksym->pos); /* .klp.sym.objname.name,pos */ snprintf(buf, 256, KLP_SYM_PREFIX "%s.%s,%s", objname, name, pos); /* Look for an already allocated symbol */ list_for_each_entry(sym, &kelf->symbols, list) { if (!strcmp(buf, sym->name)) return sym; } ALLOC_LINK(sym, &kelf->symbols); sym->name = strdup(buf); if (!sym->name) ERROR("strdup"); sym->type = ksym->type; sym->bind = ksym->bind; /* * Note that st_name will be set in kpatch_create_strtab(), * and sym->index is set in kpatch_reindex_elements() */ sym->sym.st_shndx = SHN_LIVEPATCH; sym->sym.st_info = GELF_ST_INFO(sym->bind, sym->type); return sym; } /* * Create a klp rela section given the base section and objname * * If a klp rela section matching the base section and objname * already exists, return it. */ static struct section *find_or_add_klp_relasec(struct kpatch_elf *kelf, struct section *base, char *objname) { struct section *sec; char buf[256]; /* .klp.rela.objname.secname */ snprintf(buf, 256, KLP_RELASEC_PREFIX "%s.%s", objname, base->name); list_for_each_entry(sec, &kelf->sections, list) { if (!strcmp(sec->name, buf)) return sec; } ALLOC_LINK(sec, &kelf->sections); sec->name = strdup(buf); if (!sec->name) ERROR("strdup"); sec->base = base; INIT_LIST_HEAD(&sec->relas); sec->data = malloc(sizeof(*sec->data)); if (!sec->data) ERROR("malloc"); sec->data->d_type = ELF_T_RELA; /* sh_info and sh_link are set when rebuilding rela sections */ sec->sh.sh_type = SHT_RELA; sec->sh.sh_entsize = sizeof(GElf_Rela); sec->sh.sh_addralign = 8; sec->sh.sh_flags = SHF_RELA_LIVEPATCH | SHF_INFO_LINK | SHF_ALLOC; return sec; } /* * Create klp relocation sections and klp symbols from .kpatch.relocations * and .kpatch.symbols sections * * For every entry in .kpatch.relocations: * 1) Allocate a symbol for the corresponding .kpatch.symbols entry if * it doesn't already exist (find_or_add_ksym_to_symbols()) * This is the symbol that the relocation points to (rela->sym) * 2) Allocate a rela, and add it to the corresponding .klp.rela. section. If * the matching .klp.rela. 
section (given the base section and objname) * doesn't exist yet, create it (find_or_add_klp_relasec()) */ static void create_klp_relasecs_and_syms(struct kpatch_elf *kelf, struct section *krelasec, struct section *ksymsec, char *strings) { struct section *klp_relasec; struct kpatch_relocation *krelas; struct symbol *sym, *dest; struct rela *rela; char *objname; int nr, index, offset, dest_off; krelas = krelasec->data->d_buf; nr = krelasec->data->d_size / sizeof(*krelas); for (index = 0; index < nr; index++) { offset = index * sizeof(*krelas); /* Get the rela dest sym + offset */ rela = find_rela_by_offset(krelasec->rela, offset + offsetof(struct kpatch_relocation, dest)); if (!rela) ERROR("find_rela_by_offset"); dest = rela->sym; dest_off = rela->addend; /* Get the name of the object the dest belongs to */ rela = find_rela_by_offset(krelasec->rela, offset + offsetof(struct kpatch_relocation, objname)); if (!rela) ERROR("find_rela_by_offset"); objname = strdup(strings + rela->addend); if (!objname) ERROR("strdup"); /* Get the .kpatch.symbol entry for the rela src */ rela = find_rela_by_offset(krelasec->rela, offset + offsetof(struct kpatch_relocation, ksym)); if (!rela) ERROR("find_rela_by_offset"); /* Create (or find) a klp symbol from the rela src entry */ sym = find_or_add_ksym_to_symbols(kelf, ksymsec, strings, rela->addend); if (!sym) ERROR("error finding or adding ksym to symtab"); /* Create (or find) the .klp.rela. section for the dest sec and object */ klp_relasec = find_or_add_klp_relasec(kelf, dest->sec, objname); if (!klp_relasec) ERROR("error finding or adding klp relasec"); /* Add the klp rela to the .klp.rela. section */ ALLOC_LINK(rela, &klp_relasec->relas); rela->offset = dest->sym.st_value + dest_off; rela->type = krelas[index].type; rela->sym = sym; rela->addend = krelas[index].addend; } } /* * Create .klp.arch. sections by iterating through the .kpatch.arch section * * A .kpatch.arch section is just an array of kpatch_arch structs: * * struct kpatch_arch { * unsigned long sec; * char *objname; * }; * * There are two relas associated with each kpatch arch entry, one that points * to the section of interest (.parainstructions or .altinstructions), and one * rela points to the name of the object the section belongs to in * .kpatch.strings. This gives us the necessary information to create .klp.arch * sections, which use the '.klp.arch.objname.secname' name format. 
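 *
 * For example (illustrative): a .parainstructions section belonging to
 * vmlinux becomes ".klp.arch.vmlinux..parainstructions" (the double dot is
 * expected, since the original section name itself starts with '.'), and an
 * .altinstructions section belonging to a patched module "ext4" would become
 * ".klp.arch.ext4..altinstructions".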
*/ static void create_klp_arch_sections(struct kpatch_elf *kelf, char *strings) { struct section *karch, *sec, *base = NULL; struct kpatch_arch *entries; struct rela *rela, *rela2; char *secname, *objname = NULL; char buf[256]; int nr, index, offset, old_size, new_size; karch = find_section_by_name(&kelf->sections, ".kpatch.arch"); if (!karch) return; entries = karch->data->d_buf; nr = karch->data->d_size / sizeof(*entries); for (index = 0; index < nr; index++) { offset = index * sizeof(*entries); /* Get the base section (.parainstructions or .altinstructions) */ rela = find_rela_by_offset(karch->rela, offset + offsetof(struct kpatch_arch, sec)); if (!rela) ERROR("find_rela_by_offset"); base = rela->sym->sec; if (!base) ERROR("base sec of kpatch_arch entry not found"); /* Get the name of the object the base section belongs to */ rela = find_rela_by_offset(karch->rela, offset + offsetof(struct kpatch_arch, objname)); if (!rela) ERROR("find_rela_by_offset"); objname = strdup(strings + rela->addend); if (!objname) ERROR("strdup"); /* Example: .klp.arch.vmlinux..parainstructions */ snprintf(buf, 256, "%s%s.%s", KLP_ARCH_PREFIX, objname, base->name); /* Check if the .klp.arch. section already exists */ sec = find_section_by_name(&kelf->sections, buf); if (!sec) { secname = strdup(buf); if (!secname) ERROR("strdup"); /* Start with a new section with size 0 first */ sec = create_section_pair(kelf, secname, 1, 0); } /* * Merge .klp.arch. sections if necessary * * Example: * If there are multiple .parainstructions sections for vmlinux * (this can happen when, using the --unique option for ld, * we've linked together multiple .o's with .parainstructions * sections for the same object), they will be merged under a * single .klp.arch.vmlinux..parainstructions section */ old_size = sec->data->d_size; new_size = old_size + base->data->d_size; sec->data->d_buf = realloc(sec->data->d_buf, new_size); sec->data->d_size = new_size; sec->sh.sh_size = sec->data->d_size; memcpy(sec->data->d_buf + old_size, base->data->d_buf, base->data->d_size); list_for_each_entry(rela, &base->rela->relas, list) { ALLOC_LINK(rela2, &sec->rela->relas); rela2->sym = rela->sym; rela2->type = rela->type; rela2->addend = rela->addend; rela2->offset = old_size + rela->offset; } } } /* * We can't keep these sections since the module loader will apply them before * the patch module gets a chance to load (that's why we copied these sections * into .klp.arch. sections. Hence we remove them here. 
*/ static void remove_arch_sections(struct kpatch_elf *kelf) { int i; char *arch_sections[] = { ".parainstructions", ".rela.parainstructions", ".altinstructions", ".rela.altinstructions" }; for (i = 0; i < sizeof(arch_sections)/sizeof(arch_sections[0]); i++) kpatch_remove_and_free_section(kelf, arch_sections[i]); } static void remove_intermediate_sections(struct kpatch_elf *kelf) { int i; char *intermediate_sections[] = { ".kpatch.symbols", ".rela.kpatch.symbols", ".kpatch.relocations", ".rela.kpatch.relocations", ".kpatch.arch", ".rela.kpatch.arch" }; for (i = 0; i < sizeof(intermediate_sections)/sizeof(intermediate_sections[0]); i++) kpatch_remove_and_free_section(kelf, intermediate_sections[i]); } struct arguments { char *args[2]; int debug; int no_klp_arch; }; static char args_doc[] = "input.ko output.ko"; static struct argp_option options[] = { {"debug", 'd', 0, 0, "Show debug output" }, {"no-klp-arch-sections", 'n', 0, 0, "Do not output .klp.arch.* sections" }, { 0 } }; static error_t parse_opt (int key, char *arg, struct argp_state *state) { /* Get the input argument from argp_parse, which we know is a pointer to our arguments structure. */ struct arguments *arguments = state->input; switch (key) { case 'd': arguments->debug = 1; break; case 'n': arguments->no_klp_arch = 1; break; case ARGP_KEY_ARG: if (state->arg_num >= 2) /* Too many arguments. */ argp_usage (state); arguments->args[state->arg_num] = arg; break; case ARGP_KEY_END: if (state->arg_num < 2) /* Not enough arguments. */ argp_usage (state); break; default: return ARGP_ERR_UNKNOWN; } return 0; } static struct argp argp = { options, parse_opt, args_doc, 0 }; int main(int argc, char *argv[]) { struct kpatch_elf *kelf; struct section *symtab, *sec; struct section *ksymsec, *krelasec, *strsec; struct arguments arguments; char *strings; int ksyms_nr, krelas_nr; memset(&arguments, 0, sizeof(arguments)); argp_parse (&argp, argc, argv, 0, 0, &arguments); if (arguments.debug) loglevel = DEBUG; elf_version(EV_CURRENT); childobj = basename(arguments.args[0]); kelf = kpatch_elf_open(arguments.args[0]); /* * Sanity checks: * - Make sure all the required sections exist * - Make sure that the number of entries in * .kpatch.{symbols,relocations} match */ strsec = find_section_by_name(&kelf->sections, ".kpatch.strings"); if (!strsec) ERROR("missing .kpatch.strings"); strings = strsec->data->d_buf; ksymsec = find_section_by_name(&kelf->sections, ".kpatch.symbols"); if (!ksymsec) ERROR("missing .kpatch.symbols section"); ksyms_nr = ksymsec->data->d_size / sizeof(struct kpatch_symbol); krelasec = find_section_by_name(&kelf->sections, ".kpatch.relocations"); if (!krelasec) ERROR("missing .kpatch.relocations section"); krelas_nr = krelasec->data->d_size / sizeof(struct kpatch_relocation); if (krelas_nr != ksyms_nr) ERROR("number of krelas and ksyms do not match"); /* * Create klp rela sections and klp symbols from * .kpatch.{relocations,symbols} sections */ create_klp_relasecs_and_syms(kelf, krelasec, ksymsec, strings); /* * If --no-klp-arch-sections wasn't set, additionally * create .klp.arch. sections */ if (!arguments.no_klp_arch) { create_klp_arch_sections(kelf, strings); remove_arch_sections(kelf); } remove_intermediate_sections(kelf); kpatch_reindex_elements(kelf); /* Rebuild rela sections, new klp rela sections will be rebuilt too. 
*/ symtab = find_section_by_name(&kelf->sections, ".symtab"); list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec)) continue; sec->sh.sh_link = symtab->index; sec->sh.sh_info = sec->base->index; kpatch_rebuild_rela_section_data(sec); } kpatch_create_shstrtab(kelf); kpatch_create_strtab(kelf); kpatch_create_symtab(kelf); kpatch_write_output_elf(kelf, kelf->elf, arguments.args[1]); kpatch_elf_teardown(kelf); kpatch_elf_free(kelf); return 0; } kpatch-0.5.0/kpatch-build/create-kpatch-module.c000066400000000000000000000156551321664017000215130ustar00rootroot00000000000000/* * create-kpatch-module.c * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. */ #include #include #include #include #include "log.h" #include "kpatch-elf.h" #include "kpatch-intermediate.h" #include "kpatch-patch.h" /* For log.h */ char *childobj; enum loglevel loglevel = NORMAL; /* * Create .kpatch.dynrelas from .kpatch.relocations and .kpatch.symbols sections * * Iterate through .kpatch.relocations and fill in the corresponding dynrela * entry using information from .kpatch.relocations and .kpatch.symbols */ static void create_dynamic_rela_sections(struct kpatch_elf *kelf, struct section *krelasec, struct section *ksymsec, struct section *strsec) { struct kpatch_patch_dynrela *dynrelas; struct kpatch_relocation *krelas; struct kpatch_symbol *ksym, *ksyms; struct section *dynsec; struct symbol *sym; struct rela *rela; int index, nr, offset, dest_offset, objname_offset, name_offset; ksyms = ksymsec->data->d_buf; krelas = krelasec->data->d_buf; nr = krelasec->data->d_size / sizeof(*krelas); dynsec = create_section_pair(kelf, ".kpatch.dynrelas", sizeof(*dynrelas), nr); dynrelas = dynsec->data->d_buf; for (index = 0; index < nr; index++) { offset = index * sizeof(*krelas); /* * To fill in each dynrela entry, find dest location, * objname offset, ksym, and symbol name offset */ /* Get dest location */ rela = find_rela_by_offset(krelasec->rela, offset + offsetof(struct kpatch_relocation, dest)); if (!rela) ERROR("find_rela_by_offset"); sym = rela->sym; dest_offset = rela->addend; /* Get objname offset */ rela = find_rela_by_offset(krelasec->rela, offset + offsetof(struct kpatch_relocation, objname)); if (!rela) ERROR("find_rela_by_offset"); objname_offset = rela->addend; /* Get ksym (.kpatch.symbols entry) and symbol name offset */ rela = find_rela_by_offset(krelasec->rela, offset + offsetof(struct kpatch_relocation, ksym)); if (!rela) ERROR("find_rela_by_offset"); ksym = ksyms + (rela->addend / sizeof(*ksyms)); offset = index * sizeof(*ksyms); rela = find_rela_by_offset(ksymsec->rela, offset + offsetof(struct kpatch_symbol, name)); if (!rela) ERROR("find_rela_by_offset"); name_offset = rela->addend; /* Fill in dynrela entry */ dynrelas[index].src = ksym->src; dynrelas[index].addend = krelas[index].addend; dynrelas[index].type = krelas[index].type; 
dynrelas[index].external = krelas[index].external; dynrelas[index].sympos = ksym->pos; /* dest */ ALLOC_LINK(rela, &dynsec->rela->relas); rela->sym = sym; rela->type = R_X86_64_64; rela->addend = dest_offset; rela->offset = index * sizeof(*dynrelas); /* name */ ALLOC_LINK(rela, &dynsec->rela->relas); rela->sym = strsec->secsym; rela->type = R_X86_64_64; rela->addend = name_offset; rela->offset = index * sizeof(*dynrelas) + \ offsetof(struct kpatch_patch_dynrela, name); /* objname */ ALLOC_LINK(rela, &dynsec->rela->relas); rela->sym = strsec->secsym; rela->type = R_X86_64_64; rela->addend = objname_offset; rela->offset = index * sizeof(*dynrelas) + \ offsetof(struct kpatch_patch_dynrela, objname); } } static void remove_intermediate_sections(struct kpatch_elf *kelf) { int i; char *intermediate_sections[] = { ".kpatch.symbols", ".rela.kpatch.symbols", ".kpatch.relocations", ".rela.kpatch.relocations", ".kpatch.arch", ".rela.kpatch.arch" }; for (i = 0; i < sizeof(intermediate_sections)/sizeof(intermediate_sections[0]); i++) kpatch_remove_and_free_section(kelf, intermediate_sections[i]); } struct arguments { char *args[2]; int debug; }; static char args_doc[] = "input.o output.o"; static struct argp_option options[] = { {"debug", 'd', 0, 0, "Show debug output" }, { 0 } }; static error_t parse_opt (int key, char *arg, struct argp_state *state) { /* Get the input argument from argp_parse, which we know is a pointer to our arguments structure. */ struct arguments *arguments = state->input; switch (key) { case 'd': arguments->debug = 1; break; case ARGP_KEY_ARG: if (state->arg_num >= 2) /* Too many arguments. */ argp_usage (state); arguments->args[state->arg_num] = arg; break; case ARGP_KEY_END: if (state->arg_num < 2) /* Not enough arguments. */ argp_usage (state); break; default: return ARGP_ERR_UNKNOWN; } return 0; } static struct argp argp = { options, parse_opt, args_doc, 0 }; int main(int argc, char *argv[]) { struct kpatch_elf *kelf; struct section *symtab, *sec; struct section *ksymsec, *krelasec, *strsec; struct arguments arguments; int ksyms_nr, krelas_nr; arguments.debug = 0; argp_parse (&argp, argc, argv, 0, 0, &arguments); if (arguments.debug) loglevel = DEBUG; elf_version(EV_CURRENT); childobj = basename(arguments.args[0]); kelf = kpatch_elf_open(arguments.args[0]); /* * Sanity checks: * - Make sure all the required sections exist * - Make sure that the number of entries in * .kpatch.{symbols,relocations} match */ strsec = find_section_by_name(&kelf->sections, ".kpatch.strings"); if (!strsec) ERROR("missing .kpatch.strings"); ksymsec = find_section_by_name(&kelf->sections, ".kpatch.symbols"); if (!ksymsec) ERROR("missing .kpatch.symbols section"); ksyms_nr = ksymsec->data->d_size / sizeof(struct kpatch_symbol); krelasec = find_section_by_name(&kelf->sections, ".kpatch.relocations"); if (!krelasec) ERROR("missing .kpatch.relocations section"); krelas_nr = krelasec->data->d_size / sizeof(struct kpatch_relocation); if (krelas_nr != ksyms_nr) ERROR("number of krelas and ksyms do not match"); /* Create dynrelas from .kpatch.{relocations,symbols} sections */ create_dynamic_rela_sections(kelf, krelasec, ksymsec, strsec); remove_intermediate_sections(kelf); kpatch_reindex_elements(kelf); symtab = find_section_by_name(&kelf->sections, ".symtab"); list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec)) continue; sec->sh.sh_link = symtab->index; sec->sh.sh_info = sec->base->index; kpatch_rebuild_rela_section_data(sec); } kpatch_create_shstrtab(kelf); 
kpatch_create_strtab(kelf); kpatch_create_symtab(kelf); kpatch_write_output_elf(kelf, kelf->elf, arguments.args[1]); kpatch_elf_teardown(kelf); kpatch_elf_free(kelf); return 0; } kpatch-0.5.0/kpatch-build/gcc-plugins/000077500000000000000000000000001321664017000175505ustar00rootroot00000000000000kpatch-0.5.0/kpatch-build/gcc-plugins/gcc-common.h000066400000000000000000000570051321664017000217520ustar00rootroot00000000000000#ifndef GCC_COMMON_H_INCLUDED #define GCC_COMMON_H_INCLUDED #include "bversion.h" #if BUILDING_GCC_VERSION >= 6000 #include "gcc-plugin.h" #else #include "plugin.h" #endif #include "plugin-version.h" #include "config.h" #include "system.h" #include "coretypes.h" #include "tm.h" #include "line-map.h" #include "input.h" #include "tree.h" #include "tree-inline.h" #include "version.h" #include "rtl.h" #include "tm_p.h" #include "flags.h" #include "hard-reg-set.h" #include "output.h" #include "except.h" #include "function.h" #include "toplev.h" #if BUILDING_GCC_VERSION >= 5000 #include "expr.h" #endif #include "basic-block.h" #include "intl.h" #include "ggc.h" #include "timevar.h" #include "params.h" #if BUILDING_GCC_VERSION <= 4009 #include "pointer-set.h" #else #include "hash-map.h" #endif #if BUILDING_GCC_VERSION >= 7000 #include "memmodel.h" #endif #include "emit-rtl.h" #include "debug.h" #include "target.h" #include "langhooks.h" #include "cfgloop.h" #include "cgraph.h" #include "opts.h" #if BUILDING_GCC_VERSION == 4005 #include #endif #if BUILDING_GCC_VERSION >= 4007 #include "tree-pretty-print.h" #include "gimple-pretty-print.h" #endif #if BUILDING_GCC_VERSION >= 4006 /* * The c-family headers were moved into a subdirectory in GCC version * 4.7, but most plugin-building users of GCC 4.6 are using the Debian * or Ubuntu package, which has an out-of-tree patch to move this to the * same location as found in 4.7 and later: * https://sources.debian.net/src/gcc-4.6/4.6.3-14/debian/patches/pr45078.diff/ */ #include "c-family/c-common.h" #else #include "c-common.h" #endif #if BUILDING_GCC_VERSION <= 4008 #include "tree-flow.h" #else #include "tree-cfgcleanup.h" #include "tree-ssa-operands.h" #include "tree-into-ssa.h" #endif #if BUILDING_GCC_VERSION >= 4008 #include "is-a.h" #endif #include "diagnostic.h" #include "tree-dump.h" #include "tree-pass.h" #if BUILDING_GCC_VERSION >= 4009 #include "pass_manager.h" #endif #include "predict.h" #include "ipa-utils.h" #if BUILDING_GCC_VERSION >= 4009 #include "attribs.h" #include "varasm.h" #include "stor-layout.h" #include "internal-fn.h" #include "gimple-expr.h" #include "gimple-fold.h" #include "context.h" #include "tree-ssa-alias.h" #include "tree-ssa.h" #include "stringpool.h" #if BUILDING_GCC_VERSION >= 7000 #include "tree-vrp.h" #endif #include "tree-ssanames.h" #include "print-tree.h" #include "tree-eh.h" #include "stmt.h" #include "gimplify.h" #endif #include "gimple.h" #if BUILDING_GCC_VERSION >= 4009 #include "tree-ssa-operands.h" #include "tree-phinodes.h" #include "tree-cfg.h" #include "gimple-iterator.h" #include "gimple-ssa.h" #include "ssa-iterators.h" #endif #if BUILDING_GCC_VERSION >= 5000 #include "builtins.h" #endif /* missing from basic_block.h... 
*/ void debug_dominance_info(enum cdi_direction dir); void debug_dominance_tree(enum cdi_direction dir, basic_block root); #if BUILDING_GCC_VERSION == 4006 void debug_gimple_stmt(gimple); void debug_gimple_seq(gimple_seq); void print_gimple_seq(FILE *, gimple_seq, int, int); void print_gimple_stmt(FILE *, gimple, int, int); void print_gimple_expr(FILE *, gimple, int, int); void dump_gimple_stmt(pretty_printer *, gimple, int, int); #endif #define __unused __attribute__((__unused__)) #define __visible __attribute__((visibility("default"))) #define DECL_NAME_POINTER(node) IDENTIFIER_POINTER(DECL_NAME(node)) #define DECL_NAME_LENGTH(node) IDENTIFIER_LENGTH(DECL_NAME(node)) #define TYPE_NAME_POINTER(node) IDENTIFIER_POINTER(TYPE_NAME(node)) #define TYPE_NAME_LENGTH(node) IDENTIFIER_LENGTH(TYPE_NAME(node)) /* should come from c-tree.h if only it were installed for gcc 4.5... */ #define C_TYPE_FIELDS_READONLY(TYPE) TREE_LANG_FLAG_1(TYPE) static inline tree build_const_char_string(int len, const char *str) { tree cstr, elem, index, type; cstr = build_string(len, str); elem = build_type_variant(char_type_node, 1, 0); index = build_index_type(size_int(len - 1)); type = build_array_type(elem, index); TREE_TYPE(cstr) = type; TREE_CONSTANT(cstr) = 1; TREE_READONLY(cstr) = 1; TREE_STATIC(cstr) = 1; return cstr; } #define PASS_INFO(NAME, REF, ID, POS) \ struct register_pass_info NAME##_pass_info = { \ .pass = make_##NAME##_pass(), \ .reference_pass_name = REF, \ .ref_pass_instance_number = ID, \ .pos_op = POS, \ } #if BUILDING_GCC_VERSION == 4005 #define FOR_EACH_LOCAL_DECL(FUN, I, D) \ for (tree vars = (FUN)->local_decls, (I) = 0; \ vars && ((D) = TREE_VALUE(vars)); \ vars = TREE_CHAIN(vars), (I)++) #define DECL_CHAIN(NODE) (TREE_CHAIN(DECL_MINIMAL_CHECK(NODE))) #define FOR_EACH_VEC_ELT(T, V, I, P) \ for (I = 0; VEC_iterate(T, (V), (I), (P)); ++(I)) #define TODO_rebuild_cgraph_edges 0 #define SCOPE_FILE_SCOPE_P(EXP) (!(EXP)) #ifndef O_BINARY #define O_BINARY 0 #endif typedef struct varpool_node *varpool_node_ptr; static inline bool gimple_call_builtin_p(gimple stmt, enum built_in_function code) { tree fndecl; if (!is_gimple_call(stmt)) return false; fndecl = gimple_call_fndecl(stmt); if (!fndecl || DECL_BUILT_IN_CLASS(fndecl) != BUILT_IN_NORMAL) return false; return DECL_FUNCTION_CODE(fndecl) == code; } static inline bool is_simple_builtin(tree decl) { if (decl && DECL_BUILT_IN_CLASS(decl) != BUILT_IN_NORMAL) return false; switch (DECL_FUNCTION_CODE(decl)) { /* Builtins that expand to constants. */ case BUILT_IN_CONSTANT_P: case BUILT_IN_EXPECT: case BUILT_IN_OBJECT_SIZE: case BUILT_IN_UNREACHABLE: /* Simple register moves or loads from stack. */ case BUILT_IN_RETURN_ADDRESS: case BUILT_IN_EXTRACT_RETURN_ADDR: case BUILT_IN_FROB_RETURN_ADDR: case BUILT_IN_RETURN: case BUILT_IN_AGGREGATE_INCOMING_ADDRESS: case BUILT_IN_FRAME_ADDRESS: case BUILT_IN_VA_END: case BUILT_IN_STACK_SAVE: case BUILT_IN_STACK_RESTORE: /* Exception state returns or moves registers around. 
*/ case BUILT_IN_EH_FILTER: case BUILT_IN_EH_POINTER: case BUILT_IN_EH_COPY_VALUES: return true; default: return false; } } static inline void add_local_decl(struct function *fun, tree d) { gcc_assert(TREE_CODE(d) == VAR_DECL); fun->local_decls = tree_cons(NULL_TREE, d, fun->local_decls); } #endif #if BUILDING_GCC_VERSION <= 4006 #define ANY_RETURN_P(rtx) (GET_CODE(rtx) == RETURN) #define C_DECL_REGISTER(EXP) DECL_LANG_FLAG_4(EXP) #define EDGE_PRESERVE 0ULL #define HOST_WIDE_INT_PRINT_HEX_PURE "%" HOST_WIDE_INT_PRINT "x" #define flag_fat_lto_objects true #define get_random_seed(noinit) ({ \ unsigned HOST_WIDE_INT seed; \ sscanf(get_random_seed(noinit), "%" HOST_WIDE_INT_PRINT "x", &seed); \ seed * seed; }) #define int_const_binop(code, arg1, arg2) \ int_const_binop((code), (arg1), (arg2), 0) static inline bool gimple_clobber_p(gimple s __unused) { return false; } static inline bool gimple_asm_clobbers_memory_p(const_gimple stmt) { unsigned i; for (i = 0; i < gimple_asm_nclobbers(stmt); i++) { tree op = gimple_asm_clobber_op(stmt, i); if (!strcmp(TREE_STRING_POINTER(TREE_VALUE(op)), "memory")) return true; } return false; } static inline tree builtin_decl_implicit(enum built_in_function fncode) { return implicit_built_in_decls[fncode]; } static inline int ipa_reverse_postorder(struct cgraph_node **order) { return cgraph_postorder(order); } static inline struct cgraph_node *cgraph_create_node(tree decl) { return cgraph_node(decl); } static inline struct cgraph_node *cgraph_get_create_node(tree decl) { struct cgraph_node *node = cgraph_get_node(decl); return node ? node : cgraph_node(decl); } static inline bool cgraph_function_with_gimple_body_p(struct cgraph_node *node) { return node->analyzed && !node->thunk.thunk_p && !node->alias; } static inline struct cgraph_node *cgraph_first_function_with_gimple_body(void) { struct cgraph_node *node; for (node = cgraph_nodes; node; node = node->next) if (cgraph_function_with_gimple_body_p(node)) return node; return NULL; } static inline struct cgraph_node *cgraph_next_function_with_gimple_body(struct cgraph_node *node) { for (node = node->next; node; node = node->next) if (cgraph_function_with_gimple_body_p(node)) return node; return NULL; } static inline bool cgraph_for_node_and_aliases(cgraph_node_ptr node, bool (*callback)(cgraph_node_ptr, void *), void *data, bool include_overwritable) { cgraph_node_ptr alias; if (callback(node, data)) return true; for (alias = node->same_body; alias; alias = alias->next) { if (include_overwritable || cgraph_function_body_availability(alias) > AVAIL_OVERWRITABLE) if (cgraph_for_node_and_aliases(alias, callback, data, include_overwritable)) return true; } return false; } #define FOR_EACH_FUNCTION_WITH_GIMPLE_BODY(node) \ for ((node) = cgraph_first_function_with_gimple_body(); (node); \ (node) = cgraph_next_function_with_gimple_body(node)) static inline void varpool_add_new_variable(tree decl) { varpool_finalize_decl(decl); } #endif #if BUILDING_GCC_VERSION <= 4007 #define FOR_EACH_FUNCTION(node) \ for (node = cgraph_nodes; node; node = node->next) #define FOR_EACH_VARIABLE(node) \ for (node = varpool_nodes; node; node = node->next) #define PROP_loops 0 #define NODE_SYMBOL(node) (node) #define NODE_DECL(node) (node)->decl #define INSN_LOCATION(INSN) RTL_LOCATION(INSN) #define vNULL NULL static inline int bb_loop_depth(const_basic_block bb) { return bb->loop_father ? 
loop_depth(bb->loop_father) : 0; } static inline bool gimple_store_p(gimple gs) { tree lhs = gimple_get_lhs(gs); return lhs && !is_gimple_reg(lhs); } static inline void gimple_init_singleton(gimple g __unused) { } #endif #if BUILDING_GCC_VERSION == 4007 || BUILDING_GCC_VERSION == 4008 static inline struct cgraph_node *cgraph_alias_target(struct cgraph_node *n) { return cgraph_alias_aliased_node(n); } #endif #if BUILDING_GCC_VERSION >= 4007 && BUILDING_GCC_VERSION <= 4009 #define cgraph_create_edge(caller, callee, call_stmt, count, freq, nest) \ cgraph_create_edge((caller), (callee), (call_stmt), (count), (freq)) #define cgraph_create_edge_including_clones(caller, callee, old_call_stmt, call_stmt, count, freq, nest, reason) \ cgraph_create_edge_including_clones((caller), (callee), (old_call_stmt), (call_stmt), (count), (freq), (reason)) #endif #if BUILDING_GCC_VERSION <= 4008 #define ENTRY_BLOCK_PTR_FOR_FN(FN) ENTRY_BLOCK_PTR_FOR_FUNCTION(FN) #define EXIT_BLOCK_PTR_FOR_FN(FN) EXIT_BLOCK_PTR_FOR_FUNCTION(FN) #define basic_block_info_for_fn(FN) ((FN)->cfg->x_basic_block_info) #define n_basic_blocks_for_fn(FN) ((FN)->cfg->x_n_basic_blocks) #define n_edges_for_fn(FN) ((FN)->cfg->x_n_edges) #define last_basic_block_for_fn(FN) ((FN)->cfg->x_last_basic_block) #define label_to_block_map_for_fn(FN) ((FN)->cfg->x_label_to_block_map) #define profile_status_for_fn(FN) ((FN)->cfg->x_profile_status) #define BASIC_BLOCK_FOR_FN(FN, N) BASIC_BLOCK_FOR_FUNCTION((FN), (N)) #define NODE_IMPLICIT_ALIAS(node) (node)->same_body_alias #define VAR_P(NODE) (TREE_CODE(NODE) == VAR_DECL) static inline bool tree_fits_shwi_p(const_tree t) { if (t == NULL_TREE || TREE_CODE(t) != INTEGER_CST) return false; if (TREE_INT_CST_HIGH(t) == 0 && (HOST_WIDE_INT)TREE_INT_CST_LOW(t) >= 0) return true; if (TREE_INT_CST_HIGH(t) == -1 && (HOST_WIDE_INT)TREE_INT_CST_LOW(t) < 0 && !TYPE_UNSIGNED(TREE_TYPE(t))) return true; return false; } static inline bool tree_fits_uhwi_p(const_tree t) { if (t == NULL_TREE || TREE_CODE(t) != INTEGER_CST) return false; return TREE_INT_CST_HIGH(t) == 0; } static inline HOST_WIDE_INT tree_to_shwi(const_tree t) { gcc_assert(tree_fits_shwi_p(t)); return TREE_INT_CST_LOW(t); } static inline unsigned HOST_WIDE_INT tree_to_uhwi(const_tree t) { gcc_assert(tree_fits_uhwi_p(t)); return TREE_INT_CST_LOW(t); } static inline const char *get_tree_code_name(enum tree_code code) { gcc_assert(code < MAX_TREE_CODES); return tree_code_name[code]; } #define ipa_remove_stmt_references(cnode, stmt) typedef union gimple_statement_d gasm; typedef union gimple_statement_d gassign; typedef union gimple_statement_d gcall; typedef union gimple_statement_d gcond; typedef union gimple_statement_d gdebug; typedef union gimple_statement_d ggoto; typedef union gimple_statement_d gphi; typedef union gimple_statement_d greturn; static inline gasm *as_a_gasm(gimple stmt) { return stmt; } static inline const gasm *as_a_const_gasm(const_gimple stmt) { return stmt; } static inline gassign *as_a_gassign(gimple stmt) { return stmt; } static inline const gassign *as_a_const_gassign(const_gimple stmt) { return stmt; } static inline gcall *as_a_gcall(gimple stmt) { return stmt; } static inline const gcall *as_a_const_gcall(const_gimple stmt) { return stmt; } static inline gcond *as_a_gcond(gimple stmt) { return stmt; } static inline const gcond *as_a_const_gcond(const_gimple stmt) { return stmt; } static inline gdebug *as_a_gdebug(gimple stmt) { return stmt; } static inline const gdebug *as_a_const_gdebug(const_gimple stmt) { return stmt; } 
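/*
 * On gcc <= 4.8 the typed gimple statement classes (gasm, gassign, gcall,
 * gphi, greturn, ...) do not exist, so they are typedef'd above to the plain
 * gimple union and the as_a_* helpers in this block are identity wrappers.
 * That lets plugin code written against the gcc >= 4.9 typed accessors
 * compile unchanged on older compilers.
 */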
static inline ggoto *as_a_ggoto(gimple stmt) { return stmt; } static inline const ggoto *as_a_const_ggoto(const_gimple stmt) { return stmt; } static inline gphi *as_a_gphi(gimple stmt) { return stmt; } static inline const gphi *as_a_const_gphi(const_gimple stmt) { return stmt; } static inline greturn *as_a_greturn(gimple stmt) { return stmt; } static inline const greturn *as_a_const_greturn(const_gimple stmt) { return stmt; } #endif #if BUILDING_GCC_VERSION == 4008 #define NODE_SYMBOL(node) (&(node)->symbol) #define NODE_DECL(node) (node)->symbol.decl #endif #if BUILDING_GCC_VERSION >= 4008 #define add_referenced_var(var) #define mark_sym_for_renaming(var) #define varpool_mark_needed_node(node) #define create_var_ann(var) #define TODO_dump_func 0 #define TODO_dump_cgraph 0 #endif #if BUILDING_GCC_VERSION <= 4009 #define TODO_verify_il 0 #define AVAIL_INTERPOSABLE AVAIL_OVERWRITABLE #define section_name_prefix LTO_SECTION_NAME_PREFIX #define fatal_error(loc, gmsgid, ...) fatal_error((gmsgid), __VA_ARGS__) rtx emit_move_insn(rtx x, rtx y); typedef struct rtx_def rtx_insn; static inline const char *get_decl_section_name(const_tree decl) { if (DECL_SECTION_NAME(decl) == NULL_TREE) return NULL; return TREE_STRING_POINTER(DECL_SECTION_NAME(decl)); } static inline void set_decl_section_name(tree node, const char *value) { if (value) DECL_SECTION_NAME(node) = build_string(strlen(value) + 1, value); else DECL_SECTION_NAME(node) = NULL; } #endif #if BUILDING_GCC_VERSION == 4009 typedef struct gimple_statement_asm gasm; typedef struct gimple_statement_base gassign; typedef struct gimple_statement_call gcall; typedef struct gimple_statement_base gcond; typedef struct gimple_statement_base gdebug; typedef struct gimple_statement_base ggoto; typedef struct gimple_statement_phi gphi; typedef struct gimple_statement_base greturn; static inline gasm *as_a_gasm(gimple stmt) { return as_a(stmt); } static inline const gasm *as_a_const_gasm(const_gimple stmt) { return as_a(stmt); } static inline gassign *as_a_gassign(gimple stmt) { return stmt; } static inline const gassign *as_a_const_gassign(const_gimple stmt) { return stmt; } static inline gcall *as_a_gcall(gimple stmt) { return as_a(stmt); } static inline const gcall *as_a_const_gcall(const_gimple stmt) { return as_a(stmt); } static inline gcond *as_a_gcond(gimple stmt) { return stmt; } static inline const gcond *as_a_const_gcond(const_gimple stmt) { return stmt; } static inline gdebug *as_a_gdebug(gimple stmt) { return stmt; } static inline const gdebug *as_a_const_gdebug(const_gimple stmt) { return stmt; } static inline ggoto *as_a_ggoto(gimple stmt) { return stmt; } static inline const ggoto *as_a_const_ggoto(const_gimple stmt) { return stmt; } static inline gphi *as_a_gphi(gimple stmt) { return as_a(stmt); } static inline const gphi *as_a_const_gphi(const_gimple stmt) { return as_a(stmt); } static inline greturn *as_a_greturn(gimple stmt) { return stmt; } static inline const greturn *as_a_const_greturn(const_gimple stmt) { return stmt; } #endif #if BUILDING_GCC_VERSION >= 4009 #define TODO_ggc_collect 0 #define NODE_SYMBOL(node) (node) #define NODE_DECL(node) (node)->decl #define cgraph_node_name(node) (node)->name() #define NODE_IMPLICIT_ALIAS(node) (node)->cpp_implicit_alias static inline opt_pass *get_pass_for_id(int id) { return g->get_passes()->get_pass_for_id(id); } #endif #if BUILDING_GCC_VERSION >= 5000 && BUILDING_GCC_VERSION < 6000 /* gimple related */ template <> template <> inline bool is_a_helper::test(const_gimple gs) { return gs->code == 
GIMPLE_ASSIGN; } #endif #if BUILDING_GCC_VERSION >= 5000 #define TODO_verify_ssa TODO_verify_il #define TODO_verify_flow TODO_verify_il #define TODO_verify_stmts TODO_verify_il #define TODO_verify_rtl_sharing TODO_verify_il #define INSN_DELETED_P(insn) (insn)->deleted() static inline const char *get_decl_section_name(const_tree decl) { return DECL_SECTION_NAME(decl); } /* symtab/cgraph related */ #define debug_cgraph_node(node) (node)->debug() #define cgraph_get_node(decl) cgraph_node::get(decl) #define cgraph_get_create_node(decl) cgraph_node::get_create(decl) #define cgraph_create_node(decl) cgraph_node::create(decl) #define cgraph_n_nodes symtab->cgraph_count #define cgraph_max_uid symtab->cgraph_max_uid #define varpool_get_node(decl) varpool_node::get(decl) #define dump_varpool_node(file, node) (node)->dump(file) #define cgraph_create_edge(caller, callee, call_stmt, count, freq, nest) \ (caller)->create_edge((callee), (call_stmt), (count), (freq)) #define cgraph_create_edge_including_clones(caller, callee, old_call_stmt, call_stmt, count, freq, nest, reason) \ (caller)->create_edge_including_clones((callee), (old_call_stmt), (call_stmt), (count), (freq), (reason)) typedef struct cgraph_node *cgraph_node_ptr; typedef struct cgraph_edge *cgraph_edge_p; typedef struct varpool_node *varpool_node_ptr; static inline void change_decl_assembler_name(tree decl, tree name) { symtab->change_decl_assembler_name(decl, name); } static inline void varpool_finalize_decl(tree decl) { varpool_node::finalize_decl(decl); } static inline void varpool_add_new_variable(tree decl) { varpool_node::add(decl); } static inline unsigned int rebuild_cgraph_edges(void) { return cgraph_edge::rebuild_edges(); } static inline cgraph_node_ptr cgraph_function_node(cgraph_node_ptr node, enum availability *availability) { return node->function_symbol(availability); } static inline cgraph_node_ptr cgraph_function_or_thunk_node(cgraph_node_ptr node, enum availability *availability = NULL) { return node->ultimate_alias_target(availability); } static inline bool cgraph_only_called_directly_p(cgraph_node_ptr node) { return node->only_called_directly_p(); } static inline enum availability cgraph_function_body_availability(cgraph_node_ptr node) { return node->get_availability(); } static inline cgraph_node_ptr cgraph_alias_target(cgraph_node_ptr node) { return node->get_alias_target(); } static inline bool cgraph_for_node_and_aliases(cgraph_node_ptr node, bool (*callback)(cgraph_node_ptr, void *), void *data, bool include_overwritable) { return node->call_for_symbol_thunks_and_aliases(callback, data, include_overwritable); } static inline struct cgraph_node_hook_list *cgraph_add_function_insertion_hook(cgraph_node_hook hook, void *data) { return symtab->add_cgraph_insertion_hook(hook, data); } static inline void cgraph_remove_function_insertion_hook(struct cgraph_node_hook_list *entry) { symtab->remove_cgraph_insertion_hook(entry); } static inline struct cgraph_node_hook_list *cgraph_add_node_removal_hook(cgraph_node_hook hook, void *data) { return symtab->add_cgraph_removal_hook(hook, data); } static inline void cgraph_remove_node_removal_hook(struct cgraph_node_hook_list *entry) { symtab->remove_cgraph_removal_hook(entry); } static inline struct cgraph_2node_hook_list *cgraph_add_node_duplication_hook(cgraph_2node_hook hook, void *data) { return symtab->add_cgraph_duplication_hook(hook, data); } static inline void cgraph_remove_node_duplication_hook(struct cgraph_2node_hook_list *entry) { 
symtab->remove_cgraph_duplication_hook(entry); } static inline void cgraph_call_node_duplication_hooks(cgraph_node_ptr node, cgraph_node_ptr node2) { symtab->call_cgraph_duplication_hooks(node, node2); } static inline void cgraph_call_edge_duplication_hooks(cgraph_edge *cs1, cgraph_edge *cs2) { symtab->call_edge_duplication_hooks(cs1, cs2); } #if BUILDING_GCC_VERSION >= 6000 typedef gimple *gimple_ptr; typedef const gimple *const_gimple_ptr; #define gimple gimple_ptr #define const_gimple const_gimple_ptr #undef CONST_CAST_GIMPLE #define CONST_CAST_GIMPLE(X) CONST_CAST(gimple, (X)) #endif /* gimple related */ static inline gimple gimple_build_assign_with_ops(enum tree_code subcode, tree lhs, tree op1, tree op2 MEM_STAT_DECL) { return gimple_build_assign(lhs, subcode, op1, op2 PASS_MEM_STAT); } template <> template <> inline bool is_a_helper::test(const_gimple gs) { return gs->code == GIMPLE_GOTO; } template <> template <> inline bool is_a_helper::test(const_gimple gs) { return gs->code == GIMPLE_RETURN; } static inline gasm *as_a_gasm(gimple stmt) { return as_a(stmt); } static inline const gasm *as_a_const_gasm(const_gimple stmt) { return as_a(stmt); } static inline gassign *as_a_gassign(gimple stmt) { return as_a(stmt); } static inline const gassign *as_a_const_gassign(const_gimple stmt) { return as_a(stmt); } static inline gcall *as_a_gcall(gimple stmt) { return as_a(stmt); } static inline const gcall *as_a_const_gcall(const_gimple stmt) { return as_a(stmt); } static inline ggoto *as_a_ggoto(gimple stmt) { return as_a(stmt); } static inline const ggoto *as_a_const_ggoto(const_gimple stmt) { return as_a(stmt); } static inline gphi *as_a_gphi(gimple stmt) { return as_a(stmt); } static inline const gphi *as_a_const_gphi(const_gimple stmt) { return as_a(stmt); } static inline greturn *as_a_greturn(gimple stmt) { return as_a(stmt); } static inline const greturn *as_a_const_greturn(const_gimple stmt) { return as_a(stmt); } /* IPA/LTO related */ #define ipa_ref_list_referring_iterate(L, I, P) \ (L)->referring.iterate((I), &(P)) #define ipa_ref_list_reference_iterate(L, I, P) \ (L)->reference.iterate((I), &(P)) static inline cgraph_node_ptr ipa_ref_referring_node(struct ipa_ref *ref) { return dyn_cast(ref->referring); } static inline void ipa_remove_stmt_references(symtab_node *referring_node, gimple stmt) { referring_node->remove_stmt_references(stmt); } #endif #if BUILDING_GCC_VERSION < 6000 #define get_inner_reference(exp, pbitsize, pbitpos, poffset, pmode, punsignedp, preversep, pvolatilep, keep_aligning) \ get_inner_reference(exp, pbitsize, pbitpos, poffset, pmode, punsignedp, pvolatilep, keep_aligning) #define gen_rtx_set(ARG0, ARG1) gen_rtx_SET(VOIDmode, (ARG0), (ARG1)) #endif #if BUILDING_GCC_VERSION >= 6000 #define gen_rtx_set(ARG0, ARG1) gen_rtx_SET((ARG0), (ARG1)) #endif #ifdef __cplusplus static inline void debug_tree(const_tree t) { debug_tree(CONST_CAST_TREE(t)); } static inline void debug_gimple_stmt(const_gimple s) { debug_gimple_stmt(CONST_CAST_GIMPLE(s)); } #else #define debug_tree(t) debug_tree(CONST_CAST_TREE(t)) #define debug_gimple_stmt(s) debug_gimple_stmt(CONST_CAST_GIMPLE(s)) #endif #if BUILDING_GCC_VERSION >= 7000 #define get_inner_reference(exp, pbitsize, pbitpos, poffset, pmode, punsignedp, preversep, pvolatilep, keep_aligning) \ get_inner_reference(exp, pbitsize, pbitpos, poffset, pmode, punsignedp, preversep, pvolatilep) #endif #if BUILDING_GCC_VERSION < 7000 #define SET_DECL_ALIGN(decl, align) DECL_ALIGN(decl) = (align) #define SET_DECL_MODE(decl, mode) 
DECL_MODE(decl) = (mode) #endif #endif kpatch-0.5.0/kpatch-build/gcc-plugins/gcc-generate-rtl-pass.h000066400000000000000000000101021321664017000240020ustar00rootroot00000000000000/* SPDX-License-Identifier: GPL-2.0 */ /* * Generator for RTL pass related boilerplate code/data * * Supports gcc 4.5-6 * * Usage: * * 1. before inclusion define PASS_NAME * 2. before inclusion define NO_* for unimplemented callbacks * NO_GATE * NO_EXECUTE * 3. before inclusion define PROPERTIES_* and TODO_FLAGS_* to override * the default 0 values * 4. for convenience, all the above will be undefined after inclusion! * 5. the only exported name is make_PASS_NAME_pass() to register with gcc */ #ifndef PASS_NAME #error at least PASS_NAME must be defined #else #define __GCC_PLUGIN_STRINGIFY(n) #n #define _GCC_PLUGIN_STRINGIFY(n) __GCC_PLUGIN_STRINGIFY(n) #define _GCC_PLUGIN_CONCAT2(x, y) x ## y #define _GCC_PLUGIN_CONCAT3(x, y, z) x ## y ## z #define __PASS_NAME_PASS_DATA(n) _GCC_PLUGIN_CONCAT2(n, _pass_data) #define _PASS_NAME_PASS_DATA __PASS_NAME_PASS_DATA(PASS_NAME) #define __PASS_NAME_PASS(n) _GCC_PLUGIN_CONCAT2(n, _pass) #define _PASS_NAME_PASS __PASS_NAME_PASS(PASS_NAME) #define _PASS_NAME_NAME _GCC_PLUGIN_STRINGIFY(PASS_NAME) #define __MAKE_PASS_NAME_PASS(n) _GCC_PLUGIN_CONCAT3(make_, n, _pass) #define _MAKE_PASS_NAME_PASS __MAKE_PASS_NAME_PASS(PASS_NAME) #ifdef NO_GATE #define _GATE NULL #define _HAS_GATE false #else #define __GATE(n) _GCC_PLUGIN_CONCAT2(n, _gate) #define _GATE __GATE(PASS_NAME) #define _HAS_GATE true #endif #ifdef NO_EXECUTE #define _EXECUTE NULL #define _HAS_EXECUTE false #else #define __EXECUTE(n) _GCC_PLUGIN_CONCAT2(n, _execute) #define _EXECUTE __EXECUTE(PASS_NAME) #define _HAS_EXECUTE true #endif #ifndef PROPERTIES_REQUIRED #define PROPERTIES_REQUIRED 0 #endif #ifndef PROPERTIES_PROVIDED #define PROPERTIES_PROVIDED 0 #endif #ifndef PROPERTIES_DESTROYED #define PROPERTIES_DESTROYED 0 #endif #ifndef TODO_FLAGS_START #define TODO_FLAGS_START 0 #endif #ifndef TODO_FLAGS_FINISH #define TODO_FLAGS_FINISH 0 #endif #if BUILDING_GCC_VERSION >= 4009 namespace { static const pass_data _PASS_NAME_PASS_DATA = { #else static struct rtl_opt_pass _PASS_NAME_PASS = { .pass = { #endif .type = RTL_PASS, .name = _PASS_NAME_NAME, #if BUILDING_GCC_VERSION >= 4008 .optinfo_flags = OPTGROUP_NONE, #endif #if BUILDING_GCC_VERSION >= 5000 #elif BUILDING_GCC_VERSION == 4009 .has_gate = _HAS_GATE, .has_execute = _HAS_EXECUTE, #else .gate = _GATE, .execute = _EXECUTE, .sub = NULL, .next = NULL, .static_pass_number = 0, #endif .tv_id = TV_NONE, .properties_required = PROPERTIES_REQUIRED, .properties_provided = PROPERTIES_PROVIDED, .properties_destroyed = PROPERTIES_DESTROYED, .todo_flags_start = TODO_FLAGS_START, .todo_flags_finish = TODO_FLAGS_FINISH, #if BUILDING_GCC_VERSION < 4009 } #endif }; #if BUILDING_GCC_VERSION >= 4009 class _PASS_NAME_PASS : public rtl_opt_pass { public: _PASS_NAME_PASS() : rtl_opt_pass(_PASS_NAME_PASS_DATA, g) {} #ifndef NO_GATE #if BUILDING_GCC_VERSION >= 5000 virtual bool gate(function *) { return _GATE(); } #else virtual bool gate(void) { return _GATE(); } #endif #endif virtual opt_pass *clone() { return new _PASS_NAME_PASS(); } #ifndef NO_EXECUTE #if BUILDING_GCC_VERSION >= 5000 virtual unsigned int execute(function *) { return _EXECUTE(); } #else virtual unsigned int execute(void) { return _EXECUTE(); } #endif #endif }; } opt_pass *_MAKE_PASS_NAME_PASS(void) { return new _PASS_NAME_PASS(); } #else struct opt_pass *_MAKE_PASS_NAME_PASS(void) { return &_PASS_NAME_PASS.pass; } #endif 
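/*
 * Minimal usage sketch (the pass and function names "my_pass" and
 * my_pass_execute() are illustrative only; ppc64le-plugin.c in this
 * directory is a real user of this header):
 *
 *   static unsigned int my_pass_execute(void)
 *   {
 *           // transform the RTL of the current function here
 *           return 0;
 *   }
 *
 *   #define PASS_NAME my_pass
 *   #define NO_GATE
 *   #include "gcc-generate-rtl-pass.h"
 *
 * and later, in plugin_init():
 *
 *   PASS_INFO(my_pass, "dwarf2", 1, PASS_POS_INSERT_BEFORE);
 *   register_callback(plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL,
 *                     &my_pass_pass_info);
 */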
/* clean up user provided defines */ #undef PASS_NAME #undef NO_GATE #undef NO_EXECUTE #undef PROPERTIES_DESTROYED #undef PROPERTIES_PROVIDED #undef PROPERTIES_REQUIRED #undef TODO_FLAGS_FINISH #undef TODO_FLAGS_START /* clean up generated defines */ #undef _EXECUTE #undef __EXECUTE #undef _GATE #undef __GATE #undef _GCC_PLUGIN_CONCAT2 #undef _GCC_PLUGIN_CONCAT3 #undef _GCC_PLUGIN_STRINGIFY #undef __GCC_PLUGIN_STRINGIFY #undef _HAS_EXECUTE #undef _HAS_GATE #undef _MAKE_PASS_NAME_PASS #undef __MAKE_PASS_NAME_PASS #undef _PASS_NAME_NAME #undef _PASS_NAME_PASS #undef __PASS_NAME_PASS #undef _PASS_NAME_PASS_DATA #undef __PASS_NAME_PASS_DATA #endif /* PASS_NAME */ kpatch-0.5.0/kpatch-build/gcc-plugins/ppc64le-plugin.c000066400000000000000000000043331321664017000224700ustar00rootroot00000000000000#include "gcc-common.h" #include #define PLUGIN_NAME "ppc64le-plugin" int plugin_is_GPL_compatible; struct plugin_info plugin_info = { .version = "1", .help = PLUGIN_NAME ": insert nops after local calls\n", }; static unsigned int ppc64le_plugin_execute(void) { rtx_insn *insn; int code; const char *name; static int nonlocal_code = -1, local_code = -1, value_nonlocal_code = -1, value_local_code = -1; static bool initialized = false; if (initialized) goto found; /* Find the rs6000.md code numbers for local and non-local calls */ initialized = true; for (code = 0; code < 1000; code++) { name = get_insn_name(code); if (!name) continue; if (!strcmp(name , "*call_local_aixdi")) local_code = code; else if (!strcmp(name , "*call_nonlocal_aixdi")) nonlocal_code = code; else if (!strcmp(name, "*call_value_local_aixdi")) value_local_code = code; else if (!strcmp(name, "*call_value_nonlocal_aixdi")) value_nonlocal_code = code; if (nonlocal_code != -1 && local_code != -1 && value_nonlocal_code != -1 && value_local_code != -1) goto found; } found: if (nonlocal_code == -1 || local_code == -1 || value_nonlocal_code == -1 || value_local_code == -1) { fprintf(stderr, PLUGIN_NAME ": can't find call instruction codes"); return 1; } /* Convert local calls to non-local */ for (insn = get_insns(); insn; insn = NEXT_INSN(insn)) { if (GET_CODE(insn) == CALL_INSN) { if (INSN_CODE(insn) == local_code) INSN_CODE(insn) = nonlocal_code; else if (INSN_CODE(insn) == value_local_code) INSN_CODE(insn) = value_nonlocal_code; } } return 0; } #define PASS_NAME ppc64le_plugin #define NO_GATE #include "gcc-generate-rtl-pass.h" int plugin_init(struct plugin_name_args *plugin_info, struct plugin_gcc_version *version) { const char * const plugin_name = plugin_info->base_name; PASS_INFO(ppc64le_plugin, "dwarf2", 1, PASS_POS_INSERT_BEFORE); if (!plugin_default_version_check(version, &gcc_version)) error(1, 0, PLUGIN_NAME ": incompatible gcc/plugin versions"); register_callback(plugin_name, PLUGIN_INFO, NULL, &plugin_info); register_callback(plugin_name, PLUGIN_PASS_MANAGER_SETUP, NULL, &ppc64le_plugin_pass_info); return 0; } kpatch-0.5.0/kpatch-build/insn/000077500000000000000000000000001321664017000163045ustar00rootroot00000000000000kpatch-0.5.0/kpatch-build/insn/asm/000077500000000000000000000000001321664017000170645ustar00rootroot00000000000000kpatch-0.5.0/kpatch-build/insn/asm/inat.h000066400000000000000000000137641321664017000202030ustar00rootroot00000000000000#ifndef _ASM_X86_INAT_H #define _ASM_X86_INAT_H /* * x86 instruction attributes * * Written by Masami Hiramatsu * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software 
Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * */ #include /* * Internal bits. Don't use bitmasks directly, because these bits are * unstable. You should use checking functions. */ #define INAT_OPCODE_TABLE_SIZE 256 #define INAT_GROUP_TABLE_SIZE 8 /* Legacy last prefixes */ #define INAT_PFX_OPNDSZ 1 /* 0x66 */ /* LPFX1 */ #define INAT_PFX_REPE 2 /* 0xF3 */ /* LPFX2 */ #define INAT_PFX_REPNE 3 /* 0xF2 */ /* LPFX3 */ /* Other Legacy prefixes */ #define INAT_PFX_LOCK 4 /* 0xF0 */ #define INAT_PFX_CS 5 /* 0x2E */ #define INAT_PFX_DS 6 /* 0x3E */ #define INAT_PFX_ES 7 /* 0x26 */ #define INAT_PFX_FS 8 /* 0x64 */ #define INAT_PFX_GS 9 /* 0x65 */ #define INAT_PFX_SS 10 /* 0x36 */ #define INAT_PFX_ADDRSZ 11 /* 0x67 */ /* x86-64 REX prefix */ #define INAT_PFX_REX 12 /* 0x4X */ /* AVX VEX prefixes */ #define INAT_PFX_VEX2 13 /* 2-bytes VEX prefix */ #define INAT_PFX_VEX3 14 /* 3-bytes VEX prefix */ #define INAT_LSTPFX_MAX 3 #define INAT_LGCPFX_MAX 11 /* Immediate size */ #define INAT_IMM_BYTE 1 #define INAT_IMM_WORD 2 #define INAT_IMM_DWORD 3 #define INAT_IMM_QWORD 4 #define INAT_IMM_PTR 5 #define INAT_IMM_VWORD32 6 #define INAT_IMM_VWORD 7 /* Legacy prefix */ #define INAT_PFX_OFFS 0 #define INAT_PFX_BITS 4 #define INAT_PFX_MAX ((1 << INAT_PFX_BITS) - 1) #define INAT_PFX_MASK (INAT_PFX_MAX << INAT_PFX_OFFS) /* Escape opcodes */ #define INAT_ESC_OFFS (INAT_PFX_OFFS + INAT_PFX_BITS) #define INAT_ESC_BITS 2 #define INAT_ESC_MAX ((1 << INAT_ESC_BITS) - 1) #define INAT_ESC_MASK (INAT_ESC_MAX << INAT_ESC_OFFS) /* Group opcodes (1-16) */ #define INAT_GRP_OFFS (INAT_ESC_OFFS + INAT_ESC_BITS) #define INAT_GRP_BITS 5 #define INAT_GRP_MAX ((1 << INAT_GRP_BITS) - 1) #define INAT_GRP_MASK (INAT_GRP_MAX << INAT_GRP_OFFS) /* Immediates */ #define INAT_IMM_OFFS (INAT_GRP_OFFS + INAT_GRP_BITS) #define INAT_IMM_BITS 3 #define INAT_IMM_MASK (((1 << INAT_IMM_BITS) - 1) << INAT_IMM_OFFS) /* Flags */ #define INAT_FLAG_OFFS (INAT_IMM_OFFS + INAT_IMM_BITS) #define INAT_MODRM (1 << (INAT_FLAG_OFFS)) #define INAT_FORCE64 (1 << (INAT_FLAG_OFFS + 1)) #define INAT_SCNDIMM (1 << (INAT_FLAG_OFFS + 2)) #define INAT_MOFFSET (1 << (INAT_FLAG_OFFS + 3)) #define INAT_VARIANT (1 << (INAT_FLAG_OFFS + 4)) #define INAT_VEXOK (1 << (INAT_FLAG_OFFS + 5)) #define INAT_VEXONLY (1 << (INAT_FLAG_OFFS + 6)) /* Attribute making macros for attribute tables */ #define INAT_MAKE_PREFIX(pfx) (pfx << INAT_PFX_OFFS) #define INAT_MAKE_ESCAPE(esc) (esc << INAT_ESC_OFFS) #define INAT_MAKE_GROUP(grp) ((grp << INAT_GRP_OFFS) | INAT_MODRM) #define INAT_MAKE_IMM(imm) (imm << INAT_IMM_OFFS) /* Attribute search APIs */ extern insn_attr_t inat_get_opcode_attribute(insn_byte_t opcode); extern int inat_get_last_prefix_id(insn_byte_t last_pfx); extern insn_attr_t inat_get_escape_attribute(insn_byte_t opcode, int lpfx_id, insn_attr_t esc_attr); extern insn_attr_t inat_get_group_attribute(insn_byte_t modrm, int lpfx_id, insn_attr_t esc_attr); extern insn_attr_t inat_get_avx_attribute(insn_byte_t opcode, insn_byte_t vex_m, insn_byte_t vex_pp); /* Attribute checking functions */ static 
inline int inat_is_legacy_prefix(insn_attr_t attr) { attr &= INAT_PFX_MASK; return attr && attr <= INAT_LGCPFX_MAX; } static inline int inat_is_address_size_prefix(insn_attr_t attr) { return (attr & INAT_PFX_MASK) == INAT_PFX_ADDRSZ; } static inline int inat_is_operand_size_prefix(insn_attr_t attr) { return (attr & INAT_PFX_MASK) == INAT_PFX_OPNDSZ; } static inline int inat_is_rex_prefix(insn_attr_t attr) { return (attr & INAT_PFX_MASK) == INAT_PFX_REX; } static inline int inat_last_prefix_id(insn_attr_t attr) { if ((attr & INAT_PFX_MASK) > INAT_LSTPFX_MAX) return 0; else return attr & INAT_PFX_MASK; } static inline int inat_is_vex_prefix(insn_attr_t attr) { attr &= INAT_PFX_MASK; return attr == INAT_PFX_VEX2 || attr == INAT_PFX_VEX3; } static inline int inat_is_vex3_prefix(insn_attr_t attr) { return (attr & INAT_PFX_MASK) == INAT_PFX_VEX3; } static inline int inat_is_escape(insn_attr_t attr) { return attr & INAT_ESC_MASK; } static inline int inat_escape_id(insn_attr_t attr) { return (attr & INAT_ESC_MASK) >> INAT_ESC_OFFS; } static inline int inat_is_group(insn_attr_t attr) { return attr & INAT_GRP_MASK; } static inline int inat_group_id(insn_attr_t attr) { return (attr & INAT_GRP_MASK) >> INAT_GRP_OFFS; } static inline int inat_group_common_attribute(insn_attr_t attr) { return attr & ~INAT_GRP_MASK; } static inline int inat_has_immediate(insn_attr_t attr) { return attr & INAT_IMM_MASK; } static inline int inat_immediate_size(insn_attr_t attr) { return (attr & INAT_IMM_MASK) >> INAT_IMM_OFFS; } static inline int inat_has_modrm(insn_attr_t attr) { return attr & INAT_MODRM; } static inline int inat_is_force64(insn_attr_t attr) { return attr & INAT_FORCE64; } static inline int inat_has_second_immediate(insn_attr_t attr) { return attr & INAT_SCNDIMM; } static inline int inat_has_moffset(insn_attr_t attr) { return attr & INAT_MOFFSET; } static inline int inat_has_variant(insn_attr_t attr) { return attr & INAT_VARIANT; } static inline int inat_accept_vex(insn_attr_t attr) { return attr & INAT_VEXOK; } static inline int inat_must_vex(insn_attr_t attr) { return attr & INAT_VEXONLY; } #endif kpatch-0.5.0/kpatch-build/insn/asm/inat_types.h000066400000000000000000000017651321664017000214250ustar00rootroot00000000000000#ifndef _ASM_X86_INAT_TYPES_H #define _ASM_X86_INAT_TYPES_H /* * x86 instruction attributes * * Written by Masami Hiramatsu * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 
* */ /* Instruction attributes */ typedef unsigned int insn_attr_t; typedef unsigned char insn_byte_t; typedef signed int insn_value_t; #endif kpatch-0.5.0/kpatch-build/insn/asm/insn.h000066400000000000000000000135371321664017000202150ustar00rootroot00000000000000#ifndef _ASM_X86_INSN_H #define _ASM_X86_INSN_H /* * x86 instruction analysis * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * * Copyright (C) IBM Corporation, 2009 */ /* insn_attr_t is defined in inat.h */ #include struct insn_field { union { insn_value_t value; insn_byte_t bytes[4]; }; /* !0 if we've run insn_get_xxx() for this field */ unsigned char got; unsigned char nbytes; }; struct insn { struct insn_field prefixes; /* * Prefixes * prefixes.bytes[3]: last prefix */ struct insn_field rex_prefix; /* REX prefix */ struct insn_field vex_prefix; /* VEX prefix */ struct insn_field opcode; /* * opcode.bytes[0]: opcode1 * opcode.bytes[1]: opcode2 * opcode.bytes[2]: opcode3 */ struct insn_field modrm; struct insn_field sib; struct insn_field displacement; union { struct insn_field immediate; struct insn_field moffset1; /* for 64bit MOV */ struct insn_field immediate1; /* for 64bit imm or off16/32 */ }; union { struct insn_field moffset2; /* for 64bit MOV */ struct insn_field immediate2; /* for 64bit imm or seg16 */ }; insn_attr_t attr; unsigned char opnd_bytes; unsigned char addr_bytes; unsigned char length; unsigned char x86_64; const insn_byte_t *kaddr; /* kernel address of insn to analyze */ const insn_byte_t *next_byte; }; #define MAX_INSN_SIZE 16 #define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6) #define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3) #define X86_MODRM_RM(modrm) ((modrm) & 0x07) #define X86_SIB_SCALE(sib) (((sib) & 0xc0) >> 6) #define X86_SIB_INDEX(sib) (((sib) & 0x38) >> 3) #define X86_SIB_BASE(sib) ((sib) & 0x07) #define X86_REX_W(rex) ((rex) & 8) #define X86_REX_R(rex) ((rex) & 4) #define X86_REX_X(rex) ((rex) & 2) #define X86_REX_B(rex) ((rex) & 1) /* VEX bit flags */ #define X86_VEX_W(vex) ((vex) & 0x80) /* VEX3 Byte2 */ #define X86_VEX_R(vex) ((vex) & 0x80) /* VEX2/3 Byte1 */ #define X86_VEX_X(vex) ((vex) & 0x40) /* VEX3 Byte1 */ #define X86_VEX_B(vex) ((vex) & 0x20) /* VEX3 Byte1 */ #define X86_VEX_L(vex) ((vex) & 0x04) /* VEX3 Byte2, VEX2 Byte1 */ /* VEX bit fields */ #define X86_VEX3_M(vex) ((vex) & 0x1f) /* VEX3 Byte1 */ #define X86_VEX2_M 1 /* VEX2.M always 1 */ #define X86_VEX_V(vex) (((vex) & 0x78) >> 3) /* VEX3 Byte2, VEX2 Byte1 */ #define X86_VEX_P(vex) ((vex) & 0x03) /* VEX3 Byte2, VEX2 Byte1 */ #define X86_VEX_M_MAX 0x1f /* VEX3.M Maximum value */ extern void insn_init(struct insn *insn, const void *kaddr, int x86_64); extern void insn_get_prefixes(struct insn *insn); extern void insn_get_opcode(struct insn *insn); extern void insn_get_modrm(struct insn *insn); extern void insn_get_sib(struct insn *insn); extern void insn_get_displacement(struct insn 
*insn); extern void insn_get_immediate(struct insn *insn); extern void insn_get_length(struct insn *insn); /* Attribute will be determined after getting ModRM (for opcode groups) */ static inline void insn_get_attribute(struct insn *insn) { insn_get_modrm(insn); } /* Instruction uses RIP-relative addressing */ extern int insn_rip_relative(struct insn *insn); /* Init insn for kernel text */ static inline void kernel_insn_init(struct insn *insn, const void *kaddr) { #ifdef CONFIG_X86_64 insn_init(insn, kaddr, 1); #else /* CONFIG_X86_32 */ insn_init(insn, kaddr, 0); #endif } static inline int insn_is_avx(struct insn *insn) { if (!insn->prefixes.got) insn_get_prefixes(insn); return (insn->vex_prefix.value != 0); } /* Ensure this instruction is decoded completely */ static inline int insn_complete(struct insn *insn) { return insn->opcode.got && insn->modrm.got && insn->sib.got && insn->displacement.got && insn->immediate.got; } static inline insn_byte_t insn_vex_m_bits(struct insn *insn) { if (insn->vex_prefix.nbytes == 2) /* 2 bytes VEX */ return X86_VEX2_M; else return X86_VEX3_M(insn->vex_prefix.bytes[1]); } static inline insn_byte_t insn_vex_p_bits(struct insn *insn) { if (insn->vex_prefix.nbytes == 2) /* 2 bytes VEX */ return X86_VEX_P(insn->vex_prefix.bytes[1]); else return X86_VEX_P(insn->vex_prefix.bytes[2]); } /* Get the last prefix id from last prefix or VEX prefix */ static inline int insn_last_prefix_id(struct insn *insn) { if (insn_is_avx(insn)) return insn_vex_p_bits(insn); /* VEX_p is a SIMD prefix id */ if (insn->prefixes.bytes[3]) return inat_get_last_prefix_id(insn->prefixes.bytes[3]); return 0; } /* Offset of each field from kaddr */ static inline int insn_offset_rex_prefix(struct insn *insn) { return insn->prefixes.nbytes; } static inline int insn_offset_vex_prefix(struct insn *insn) { return insn_offset_rex_prefix(insn) + insn->rex_prefix.nbytes; } static inline int insn_offset_opcode(struct insn *insn) { return insn_offset_vex_prefix(insn) + insn->vex_prefix.nbytes; } static inline int insn_offset_modrm(struct insn *insn) { return insn_offset_opcode(insn) + insn->opcode.nbytes; } static inline int insn_offset_sib(struct insn *insn) { return insn_offset_modrm(insn) + insn->modrm.nbytes; } static inline int insn_offset_displacement(struct insn *insn) { return insn_offset_sib(insn) + insn->sib.nbytes; } static inline int insn_offset_immediate(struct insn *insn) { return insn_offset_displacement(insn) + insn->displacement.nbytes; } #endif /* _ASM_X86_INSN_H */ kpatch-0.5.0/kpatch-build/insn/inat-tables.c000066400000000000000000001172231321664017000206610ustar00rootroot00000000000000/* x86 opcode map generated from x86-opcode-map.txt */ /* Do not change this code. 
*/ /* Table: one byte opcode */ const insn_attr_t inat_primary_table[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_MODRM, [0x01] = INAT_MODRM, [0x02] = INAT_MODRM, [0x03] = INAT_MODRM, [0x04] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x05] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x08] = INAT_MODRM, [0x09] = INAT_MODRM, [0x0a] = INAT_MODRM, [0x0b] = INAT_MODRM, [0x0c] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x0d] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x0f] = INAT_MAKE_ESCAPE(1), [0x10] = INAT_MODRM, [0x11] = INAT_MODRM, [0x12] = INAT_MODRM, [0x13] = INAT_MODRM, [0x14] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x15] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x18] = INAT_MODRM, [0x19] = INAT_MODRM, [0x1a] = INAT_MODRM, [0x1b] = INAT_MODRM, [0x1c] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x1d] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x20] = INAT_MODRM, [0x21] = INAT_MODRM, [0x22] = INAT_MODRM, [0x23] = INAT_MODRM, [0x24] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x25] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x26] = INAT_MAKE_PREFIX(INAT_PFX_ES), [0x28] = INAT_MODRM, [0x29] = INAT_MODRM, [0x2a] = INAT_MODRM, [0x2b] = INAT_MODRM, [0x2c] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x2d] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x2e] = INAT_MAKE_PREFIX(INAT_PFX_CS), [0x30] = INAT_MODRM, [0x31] = INAT_MODRM, [0x32] = INAT_MODRM, [0x33] = INAT_MODRM, [0x34] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x35] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x36] = INAT_MAKE_PREFIX(INAT_PFX_SS), [0x38] = INAT_MODRM, [0x39] = INAT_MODRM, [0x3a] = INAT_MODRM, [0x3b] = INAT_MODRM, [0x3c] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x3d] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x3e] = INAT_MAKE_PREFIX(INAT_PFX_DS), [0x40] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x41] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x42] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x43] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x44] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x45] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x46] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x47] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x48] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x49] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4a] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4b] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4c] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4d] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4e] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4f] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x50] = INAT_FORCE64, [0x51] = INAT_FORCE64, [0x52] = INAT_FORCE64, [0x53] = INAT_FORCE64, [0x54] = INAT_FORCE64, [0x55] = INAT_FORCE64, [0x56] = INAT_FORCE64, [0x57] = INAT_FORCE64, [0x58] = INAT_FORCE64, [0x59] = INAT_FORCE64, [0x5a] = INAT_FORCE64, [0x5b] = INAT_FORCE64, [0x5c] = INAT_FORCE64, [0x5d] = INAT_FORCE64, [0x5e] = INAT_FORCE64, [0x5f] = INAT_FORCE64, [0x62] = INAT_MODRM, [0x63] = INAT_MODRM | INAT_MODRM, [0x64] = INAT_MAKE_PREFIX(INAT_PFX_FS), [0x65] = INAT_MAKE_PREFIX(INAT_PFX_GS), [0x66] = INAT_MAKE_PREFIX(INAT_PFX_OPNDSZ), [0x67] = INAT_MAKE_PREFIX(INAT_PFX_ADDRSZ), [0x68] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x69] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM, [0x6a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0x6b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x71] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x72] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x73] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x74] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x75] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x76] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x77] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x78] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x79] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7a] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7b] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7c] = 
INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7d] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7e] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7f] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x80] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(1), [0x81] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM | INAT_MAKE_GROUP(1), [0x82] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(1), [0x83] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(1), [0x84] = INAT_MODRM, [0x85] = INAT_MODRM, [0x86] = INAT_MODRM, [0x87] = INAT_MODRM, [0x88] = INAT_MODRM, [0x89] = INAT_MODRM, [0x8a] = INAT_MODRM, [0x8b] = INAT_MODRM, [0x8c] = INAT_MODRM, [0x8d] = INAT_MODRM, [0x8e] = INAT_MODRM, [0x8f] = INAT_MAKE_GROUP(2) | INAT_MODRM | INAT_FORCE64, [0x9a] = INAT_MAKE_IMM(INAT_IMM_PTR), [0x9c] = INAT_FORCE64, [0x9d] = INAT_FORCE64, [0xa0] = INAT_MOFFSET, [0xa1] = INAT_MOFFSET, [0xa2] = INAT_MOFFSET, [0xa3] = INAT_MOFFSET, [0xa8] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xa9] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0xb0] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb1] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb2] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb3] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb4] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb5] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb6] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb7] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb8] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xb9] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xba] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xbb] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xbc] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xbd] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xbe] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xbf] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xc0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(3), [0xc1] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(3), [0xc2] = INAT_MAKE_IMM(INAT_IMM_WORD) | INAT_FORCE64, [0xc4] = INAT_MODRM | INAT_MAKE_PREFIX(INAT_PFX_VEX3), [0xc5] = INAT_MODRM | INAT_MAKE_PREFIX(INAT_PFX_VEX2), [0xc6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(4), [0xc7] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM | INAT_MAKE_GROUP(5), [0xc8] = INAT_MAKE_IMM(INAT_IMM_WORD) | INAT_SCNDIMM, [0xc9] = INAT_FORCE64, [0xca] = INAT_MAKE_IMM(INAT_IMM_WORD), [0xcd] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xd0] = INAT_MODRM | INAT_MAKE_GROUP(3), [0xd1] = INAT_MODRM | INAT_MAKE_GROUP(3), [0xd2] = INAT_MODRM | INAT_MAKE_GROUP(3), [0xd3] = INAT_MODRM | INAT_MAKE_GROUP(3), [0xd4] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xd5] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xd8] = INAT_MODRM, [0xd9] = INAT_MODRM, [0xda] = INAT_MODRM, [0xdb] = INAT_MODRM, [0xdc] = INAT_MODRM, [0xdd] = INAT_MODRM, [0xde] = INAT_MODRM, [0xdf] = INAT_MODRM, [0xe0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0xe1] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0xe2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0xe3] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0xe4] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xe5] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xe6] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xe7] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xe8] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0xe9] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0xea] = INAT_MAKE_IMM(INAT_IMM_PTR), [0xeb] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0xf0] = INAT_MAKE_PREFIX(INAT_PFX_LOCK), [0xf2] = INAT_MAKE_PREFIX(INAT_PFX_REPNE) | INAT_MAKE_PREFIX(INAT_PFX_REPNE), [0xf3] = INAT_MAKE_PREFIX(INAT_PFX_REPE) | INAT_MAKE_PREFIX(INAT_PFX_REPE), [0xf6] = INAT_MODRM | INAT_MAKE_GROUP(6), [0xf7] = INAT_MODRM | INAT_MAKE_GROUP(7), [0xfe] = INAT_MAKE_GROUP(8), [0xff] = 
INAT_MAKE_GROUP(9), }; /* Table: 2-byte opcode (0x0f) */ const insn_attr_t inat_escape_table_1[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_MAKE_GROUP(10), [0x01] = INAT_MAKE_GROUP(11), [0x02] = INAT_MODRM, [0x03] = INAT_MODRM, [0x0d] = INAT_MAKE_GROUP(12), [0x0f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0x10] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x11] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x12] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x13] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x14] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x15] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x16] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x17] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x18] = INAT_MAKE_GROUP(13), [0x1a] = INAT_MODRM | INAT_MODRM | INAT_MODRM | INAT_MODRM, [0x1b] = INAT_MODRM | INAT_MODRM | INAT_MODRM | INAT_MODRM, [0x1f] = INAT_MODRM, [0x20] = INAT_MODRM, [0x21] = INAT_MODRM, [0x22] = INAT_MODRM, [0x23] = INAT_MODRM, [0x28] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x29] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x2a] = INAT_MODRM | INAT_VARIANT, [0x2b] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x2c] = INAT_MODRM | INAT_VARIANT, [0x2d] = INAT_MODRM | INAT_VARIANT, [0x2e] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x2f] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x38] = INAT_MAKE_ESCAPE(2), [0x3a] = INAT_MAKE_ESCAPE(3), [0x40] = INAT_MODRM, [0x41] = INAT_MODRM, [0x42] = INAT_MODRM, [0x43] = INAT_MODRM, [0x44] = INAT_MODRM, [0x45] = INAT_MODRM, [0x46] = INAT_MODRM, [0x47] = INAT_MODRM, [0x48] = INAT_MODRM, [0x49] = INAT_MODRM, [0x4a] = INAT_MODRM, [0x4b] = INAT_MODRM, [0x4c] = INAT_MODRM, [0x4d] = INAT_MODRM, [0x4e] = INAT_MODRM, [0x4f] = INAT_MODRM, [0x50] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x51] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x52] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x53] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x54] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x55] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x56] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x57] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x58] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x59] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5a] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5b] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5c] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5d] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5e] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5f] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x60] = INAT_MODRM | INAT_VARIANT, [0x61] = INAT_MODRM | INAT_VARIANT, [0x62] = INAT_MODRM | INAT_VARIANT, [0x63] = INAT_MODRM | INAT_VARIANT, [0x64] = INAT_MODRM | INAT_VARIANT, [0x65] = INAT_MODRM | INAT_VARIANT, [0x66] = INAT_MODRM | INAT_VARIANT, [0x67] = INAT_MODRM | INAT_VARIANT, [0x68] = INAT_MODRM | INAT_VARIANT, [0x69] = INAT_MODRM | INAT_VARIANT, [0x6a] = INAT_MODRM | INAT_VARIANT, [0x6b] = INAT_MODRM | INAT_VARIANT, [0x6c] = INAT_VARIANT, [0x6d] = INAT_VARIANT, [0x6e] = INAT_MODRM | INAT_VARIANT, [0x6f] = INAT_MODRM | INAT_VARIANT, [0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x71] = INAT_MAKE_GROUP(14), [0x72] = INAT_MAKE_GROUP(15), [0x73] = INAT_MAKE_GROUP(16), [0x74] = INAT_MODRM | INAT_VARIANT, [0x75] = INAT_MODRM | INAT_VARIANT, [0x76] = INAT_MODRM | INAT_VARIANT, [0x77] = INAT_VEXOK | INAT_VEXOK, [0x78] = INAT_MODRM, [0x79] = INAT_MODRM, [0x7c] = INAT_VARIANT, [0x7d] = INAT_VARIANT, [0x7e] = INAT_MODRM | INAT_VARIANT, [0x7f] = INAT_MODRM | INAT_VARIANT, [0x80] 
= INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x81] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x82] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x83] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x84] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x85] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x86] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x87] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x88] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x89] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8a] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8b] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8c] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8d] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8e] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8f] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x90] = INAT_MODRM, [0x91] = INAT_MODRM, [0x92] = INAT_MODRM, [0x93] = INAT_MODRM, [0x94] = INAT_MODRM, [0x95] = INAT_MODRM, [0x96] = INAT_MODRM, [0x97] = INAT_MODRM, [0x98] = INAT_MODRM, [0x99] = INAT_MODRM, [0x9a] = INAT_MODRM, [0x9b] = INAT_MODRM, [0x9c] = INAT_MODRM, [0x9d] = INAT_MODRM, [0x9e] = INAT_MODRM, [0x9f] = INAT_MODRM, [0xa0] = INAT_FORCE64, [0xa1] = INAT_FORCE64, [0xa3] = INAT_MODRM, [0xa4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0xa5] = INAT_MODRM, [0xa6] = INAT_MAKE_GROUP(17), [0xa7] = INAT_MAKE_GROUP(18), [0xa8] = INAT_FORCE64, [0xa9] = INAT_FORCE64, [0xab] = INAT_MODRM, [0xac] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0xad] = INAT_MODRM, [0xae] = INAT_MAKE_GROUP(19), [0xaf] = INAT_MODRM, [0xb0] = INAT_MODRM, [0xb1] = INAT_MODRM, [0xb2] = INAT_MODRM, [0xb3] = INAT_MODRM, [0xb4] = INAT_MODRM, [0xb5] = INAT_MODRM, [0xb6] = INAT_MODRM, [0xb7] = INAT_MODRM, [0xb8] = INAT_VARIANT, [0xb9] = INAT_MAKE_GROUP(20), [0xba] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(21), [0xbb] = INAT_MODRM, [0xbc] = INAT_MODRM | INAT_VARIANT, [0xbd] = INAT_MODRM | INAT_VARIANT, [0xbe] = INAT_MODRM, [0xbf] = INAT_MODRM, [0xc0] = INAT_MODRM, [0xc1] = INAT_MODRM, [0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0xc3] = INAT_MODRM, [0xc4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0xc5] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0xc6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0xc7] = INAT_MAKE_GROUP(22), [0xd0] = INAT_VARIANT, [0xd1] = INAT_MODRM | INAT_VARIANT, [0xd2] = INAT_MODRM | INAT_VARIANT, [0xd3] = INAT_MODRM | INAT_VARIANT, [0xd4] = INAT_MODRM | INAT_VARIANT, [0xd5] = INAT_MODRM | INAT_VARIANT, [0xd6] = INAT_VARIANT, [0xd7] = INAT_MODRM | INAT_VARIANT, [0xd8] = INAT_MODRM | INAT_VARIANT, [0xd9] = INAT_MODRM | INAT_VARIANT, [0xda] = INAT_MODRM | INAT_VARIANT, [0xdb] = INAT_MODRM | INAT_VARIANT, [0xdc] = INAT_MODRM | INAT_VARIANT, [0xdd] = INAT_MODRM | INAT_VARIANT, [0xde] = INAT_MODRM | INAT_VARIANT, [0xdf] = INAT_MODRM | INAT_VARIANT, [0xe0] = INAT_MODRM | INAT_VARIANT, [0xe1] = INAT_MODRM | INAT_VARIANT, [0xe2] = INAT_MODRM | INAT_VARIANT, [0xe3] = INAT_MODRM | INAT_VARIANT, [0xe4] = INAT_MODRM | INAT_VARIANT, [0xe5] = INAT_MODRM | INAT_VARIANT, [0xe6] = INAT_VARIANT, [0xe7] = INAT_MODRM | INAT_VARIANT, [0xe8] = INAT_MODRM | INAT_VARIANT, [0xe9] = INAT_MODRM | INAT_VARIANT, [0xea] = INAT_MODRM | INAT_VARIANT, [0xeb] = INAT_MODRM | INAT_VARIANT, [0xec] = INAT_MODRM | INAT_VARIANT, [0xed] = INAT_MODRM | INAT_VARIANT, [0xee] = INAT_MODRM | INAT_VARIANT, [0xef] = INAT_MODRM | 
INAT_VARIANT, [0xf0] = INAT_VARIANT, [0xf1] = INAT_MODRM | INAT_VARIANT, [0xf2] = INAT_MODRM | INAT_VARIANT, [0xf3] = INAT_MODRM | INAT_VARIANT, [0xf4] = INAT_MODRM | INAT_VARIANT, [0xf5] = INAT_MODRM | INAT_VARIANT, [0xf6] = INAT_MODRM | INAT_VARIANT, [0xf7] = INAT_MODRM | INAT_VARIANT, [0xf8] = INAT_MODRM | INAT_VARIANT, [0xf9] = INAT_MODRM | INAT_VARIANT, [0xfa] = INAT_MODRM | INAT_VARIANT, [0xfb] = INAT_MODRM | INAT_VARIANT, [0xfc] = INAT_MODRM | INAT_VARIANT, [0xfd] = INAT_MODRM | INAT_VARIANT, [0xfe] = INAT_MODRM | INAT_VARIANT, }; const insn_attr_t inat_escape_table_1_1[INAT_OPCODE_TABLE_SIZE] = { [0x10] = INAT_MODRM | INAT_VEXOK, [0x11] = INAT_MODRM | INAT_VEXOK, [0x12] = INAT_MODRM | INAT_VEXOK, [0x13] = INAT_MODRM | INAT_VEXOK, [0x14] = INAT_MODRM | INAT_VEXOK, [0x15] = INAT_MODRM | INAT_VEXOK, [0x16] = INAT_MODRM | INAT_VEXOK, [0x17] = INAT_MODRM | INAT_VEXOK, [0x28] = INAT_MODRM | INAT_VEXOK, [0x29] = INAT_MODRM | INAT_VEXOK, [0x2a] = INAT_MODRM, [0x2b] = INAT_MODRM | INAT_VEXOK, [0x2c] = INAT_MODRM, [0x2d] = INAT_MODRM, [0x2e] = INAT_MODRM | INAT_VEXOK, [0x2f] = INAT_MODRM | INAT_VEXOK, [0x50] = INAT_MODRM | INAT_VEXOK, [0x51] = INAT_MODRM | INAT_VEXOK, [0x54] = INAT_MODRM | INAT_VEXOK, [0x55] = INAT_MODRM | INAT_VEXOK, [0x56] = INAT_MODRM | INAT_VEXOK, [0x57] = INAT_MODRM | INAT_VEXOK, [0x58] = INAT_MODRM | INAT_VEXOK, [0x59] = INAT_MODRM | INAT_VEXOK, [0x5a] = INAT_MODRM | INAT_VEXOK, [0x5b] = INAT_MODRM | INAT_VEXOK, [0x5c] = INAT_MODRM | INAT_VEXOK, [0x5d] = INAT_MODRM | INAT_VEXOK, [0x5e] = INAT_MODRM | INAT_VEXOK, [0x5f] = INAT_MODRM | INAT_VEXOK, [0x60] = INAT_MODRM | INAT_VEXOK, [0x61] = INAT_MODRM | INAT_VEXOK, [0x62] = INAT_MODRM | INAT_VEXOK, [0x63] = INAT_MODRM | INAT_VEXOK, [0x64] = INAT_MODRM | INAT_VEXOK, [0x65] = INAT_MODRM | INAT_VEXOK, [0x66] = INAT_MODRM | INAT_VEXOK, [0x67] = INAT_MODRM | INAT_VEXOK, [0x68] = INAT_MODRM | INAT_VEXOK, [0x69] = INAT_MODRM | INAT_VEXOK, [0x6a] = INAT_MODRM | INAT_VEXOK, [0x6b] = INAT_MODRM | INAT_VEXOK, [0x6c] = INAT_MODRM | INAT_VEXOK, [0x6d] = INAT_MODRM | INAT_VEXOK, [0x6e] = INAT_MODRM | INAT_VEXOK, [0x6f] = INAT_MODRM | INAT_VEXOK, [0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x74] = INAT_MODRM | INAT_VEXOK, [0x75] = INAT_MODRM | INAT_VEXOK, [0x76] = INAT_MODRM | INAT_VEXOK, [0x7c] = INAT_MODRM | INAT_VEXOK, [0x7d] = INAT_MODRM | INAT_VEXOK, [0x7e] = INAT_MODRM | INAT_VEXOK, [0x7f] = INAT_MODRM | INAT_VEXOK, [0xbc] = INAT_MODRM, [0xbd] = INAT_MODRM, [0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xc4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xc5] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xc6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xd0] = INAT_MODRM | INAT_VEXOK, [0xd1] = INAT_MODRM | INAT_VEXOK, [0xd2] = INAT_MODRM | INAT_VEXOK, [0xd3] = INAT_MODRM | INAT_VEXOK, [0xd4] = INAT_MODRM | INAT_VEXOK, [0xd5] = INAT_MODRM | INAT_VEXOK, [0xd6] = INAT_MODRM | INAT_VEXOK, [0xd7] = INAT_MODRM | INAT_VEXOK, [0xd8] = INAT_MODRM | INAT_VEXOK, [0xd9] = INAT_MODRM | INAT_VEXOK, [0xda] = INAT_MODRM | INAT_VEXOK, [0xdb] = INAT_MODRM | INAT_VEXOK, [0xdc] = INAT_MODRM | INAT_VEXOK, [0xdd] = INAT_MODRM | INAT_VEXOK, [0xde] = INAT_MODRM | INAT_VEXOK, [0xdf] = INAT_MODRM | INAT_VEXOK, [0xe0] = INAT_MODRM | INAT_VEXOK, [0xe1] = INAT_MODRM | INAT_VEXOK, [0xe2] = INAT_MODRM | INAT_VEXOK, [0xe3] = INAT_MODRM | INAT_VEXOK, [0xe4] = INAT_MODRM | INAT_VEXOK, [0xe5] = INAT_MODRM | INAT_VEXOK, [0xe6] = INAT_MODRM | INAT_VEXOK, [0xe7] = INAT_MODRM | 
INAT_VEXOK, [0xe8] = INAT_MODRM | INAT_VEXOK, [0xe9] = INAT_MODRM | INAT_VEXOK, [0xea] = INAT_MODRM | INAT_VEXOK, [0xeb] = INAT_MODRM | INAT_VEXOK, [0xec] = INAT_MODRM | INAT_VEXOK, [0xed] = INAT_MODRM | INAT_VEXOK, [0xee] = INAT_MODRM | INAT_VEXOK, [0xef] = INAT_MODRM | INAT_VEXOK, [0xf1] = INAT_MODRM | INAT_VEXOK, [0xf2] = INAT_MODRM | INAT_VEXOK, [0xf3] = INAT_MODRM | INAT_VEXOK, [0xf4] = INAT_MODRM | INAT_VEXOK, [0xf5] = INAT_MODRM | INAT_VEXOK, [0xf6] = INAT_MODRM | INAT_VEXOK, [0xf7] = INAT_MODRM | INAT_VEXOK, [0xf8] = INAT_MODRM | INAT_VEXOK, [0xf9] = INAT_MODRM | INAT_VEXOK, [0xfa] = INAT_MODRM | INAT_VEXOK, [0xfb] = INAT_MODRM | INAT_VEXOK, [0xfc] = INAT_MODRM | INAT_VEXOK, [0xfd] = INAT_MODRM | INAT_VEXOK, [0xfe] = INAT_MODRM | INAT_VEXOK, }; const insn_attr_t inat_escape_table_1_2[INAT_OPCODE_TABLE_SIZE] = { [0x10] = INAT_MODRM | INAT_VEXOK, [0x11] = INAT_MODRM | INAT_VEXOK, [0x12] = INAT_MODRM | INAT_VEXOK, [0x16] = INAT_MODRM | INAT_VEXOK, [0x2a] = INAT_MODRM | INAT_VEXOK, [0x2c] = INAT_MODRM | INAT_VEXOK, [0x2d] = INAT_MODRM | INAT_VEXOK, [0x51] = INAT_MODRM | INAT_VEXOK, [0x52] = INAT_MODRM | INAT_VEXOK, [0x53] = INAT_MODRM | INAT_VEXOK, [0x58] = INAT_MODRM | INAT_VEXOK, [0x59] = INAT_MODRM | INAT_VEXOK, [0x5a] = INAT_MODRM | INAT_VEXOK, [0x5b] = INAT_MODRM | INAT_VEXOK, [0x5c] = INAT_MODRM | INAT_VEXOK, [0x5d] = INAT_MODRM | INAT_VEXOK, [0x5e] = INAT_MODRM | INAT_VEXOK, [0x5f] = INAT_MODRM | INAT_VEXOK, [0x6f] = INAT_MODRM | INAT_VEXOK, [0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x7e] = INAT_MODRM | INAT_VEXOK, [0x7f] = INAT_MODRM | INAT_VEXOK, [0xb8] = INAT_MODRM, [0xbc] = INAT_MODRM, [0xbd] = INAT_MODRM, [0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xd6] = INAT_MODRM, [0xe6] = INAT_MODRM | INAT_VEXOK, }; const insn_attr_t inat_escape_table_1_3[INAT_OPCODE_TABLE_SIZE] = { [0x10] = INAT_MODRM | INAT_VEXOK, [0x11] = INAT_MODRM | INAT_VEXOK, [0x12] = INAT_MODRM | INAT_VEXOK, [0x2a] = INAT_MODRM | INAT_VEXOK, [0x2c] = INAT_MODRM | INAT_VEXOK, [0x2d] = INAT_MODRM | INAT_VEXOK, [0x51] = INAT_MODRM | INAT_VEXOK, [0x58] = INAT_MODRM | INAT_VEXOK, [0x59] = INAT_MODRM | INAT_VEXOK, [0x5a] = INAT_MODRM | INAT_VEXOK, [0x5c] = INAT_MODRM | INAT_VEXOK, [0x5d] = INAT_MODRM | INAT_VEXOK, [0x5e] = INAT_MODRM | INAT_VEXOK, [0x5f] = INAT_MODRM | INAT_VEXOK, [0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x7c] = INAT_MODRM | INAT_VEXOK, [0x7d] = INAT_MODRM | INAT_VEXOK, [0xbc] = INAT_MODRM, [0xbd] = INAT_MODRM, [0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xd0] = INAT_MODRM | INAT_VEXOK, [0xd6] = INAT_MODRM, [0xe6] = INAT_MODRM | INAT_VEXOK, [0xf0] = INAT_MODRM | INAT_VEXOK, }; /* Table: 3-byte opcode 1 (0x0f 0x38) */ const insn_attr_t inat_escape_table_2[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_MODRM | INAT_VARIANT, [0x01] = INAT_MODRM | INAT_VARIANT, [0x02] = INAT_MODRM | INAT_VARIANT, [0x03] = INAT_MODRM | INAT_VARIANT, [0x04] = INAT_MODRM | INAT_VARIANT, [0x05] = INAT_MODRM | INAT_VARIANT, [0x06] = INAT_MODRM | INAT_VARIANT, [0x07] = INAT_MODRM | INAT_VARIANT, [0x08] = INAT_MODRM | INAT_VARIANT, [0x09] = INAT_MODRM | INAT_VARIANT, [0x0a] = INAT_MODRM | INAT_VARIANT, [0x0b] = INAT_MODRM | INAT_VARIANT, [0x0c] = INAT_VARIANT, [0x0d] = INAT_VARIANT, [0x0e] = INAT_VARIANT, [0x0f] = INAT_VARIANT, [0x10] = INAT_VARIANT, [0x13] = INAT_VARIANT, [0x14] = INAT_VARIANT, [0x15] = INAT_VARIANT, [0x16] = INAT_VARIANT, [0x17] = INAT_VARIANT, [0x18] = INAT_VARIANT, [0x19] = INAT_VARIANT, [0x1a] = INAT_VARIANT, [0x1c] = 
INAT_MODRM | INAT_VARIANT, [0x1d] = INAT_MODRM | INAT_VARIANT, [0x1e] = INAT_MODRM | INAT_VARIANT, [0x20] = INAT_VARIANT, [0x21] = INAT_VARIANT, [0x22] = INAT_VARIANT, [0x23] = INAT_VARIANT, [0x24] = INAT_VARIANT, [0x25] = INAT_VARIANT, [0x28] = INAT_VARIANT, [0x29] = INAT_VARIANT, [0x2a] = INAT_VARIANT, [0x2b] = INAT_VARIANT, [0x2c] = INAT_VARIANT, [0x2d] = INAT_VARIANT, [0x2e] = INAT_VARIANT, [0x2f] = INAT_VARIANT, [0x30] = INAT_VARIANT, [0x31] = INAT_VARIANT, [0x32] = INAT_VARIANT, [0x33] = INAT_VARIANT, [0x34] = INAT_VARIANT, [0x35] = INAT_VARIANT, [0x36] = INAT_VARIANT, [0x37] = INAT_VARIANT, [0x38] = INAT_VARIANT, [0x39] = INAT_VARIANT, [0x3a] = INAT_VARIANT, [0x3b] = INAT_VARIANT, [0x3c] = INAT_VARIANT, [0x3d] = INAT_VARIANT, [0x3e] = INAT_VARIANT, [0x3f] = INAT_VARIANT, [0x40] = INAT_VARIANT, [0x41] = INAT_VARIANT, [0x45] = INAT_VARIANT, [0x46] = INAT_VARIANT, [0x47] = INAT_VARIANT, [0x58] = INAT_VARIANT, [0x59] = INAT_VARIANT, [0x5a] = INAT_VARIANT, [0x78] = INAT_VARIANT, [0x79] = INAT_VARIANT, [0x80] = INAT_VARIANT, [0x81] = INAT_VARIANT, [0x82] = INAT_VARIANT, [0x8c] = INAT_VARIANT, [0x8e] = INAT_VARIANT, [0x90] = INAT_VARIANT, [0x91] = INAT_VARIANT, [0x92] = INAT_VARIANT, [0x93] = INAT_VARIANT, [0x96] = INAT_VARIANT, [0x97] = INAT_VARIANT, [0x98] = INAT_VARIANT, [0x99] = INAT_VARIANT, [0x9a] = INAT_VARIANT, [0x9b] = INAT_VARIANT, [0x9c] = INAT_VARIANT, [0x9d] = INAT_VARIANT, [0x9e] = INAT_VARIANT, [0x9f] = INAT_VARIANT, [0xa6] = INAT_VARIANT, [0xa7] = INAT_VARIANT, [0xa8] = INAT_VARIANT, [0xa9] = INAT_VARIANT, [0xaa] = INAT_VARIANT, [0xab] = INAT_VARIANT, [0xac] = INAT_VARIANT, [0xad] = INAT_VARIANT, [0xae] = INAT_VARIANT, [0xaf] = INAT_VARIANT, [0xb6] = INAT_VARIANT, [0xb7] = INAT_VARIANT, [0xb8] = INAT_VARIANT, [0xb9] = INAT_VARIANT, [0xba] = INAT_VARIANT, [0xbb] = INAT_VARIANT, [0xbc] = INAT_VARIANT, [0xbd] = INAT_VARIANT, [0xbe] = INAT_VARIANT, [0xbf] = INAT_VARIANT, [0xdb] = INAT_VARIANT, [0xdc] = INAT_VARIANT, [0xdd] = INAT_VARIANT, [0xde] = INAT_VARIANT, [0xdf] = INAT_VARIANT, [0xf0] = INAT_MODRM | INAT_MODRM | INAT_VARIANT, [0xf1] = INAT_MODRM | INAT_MODRM | INAT_VARIANT, [0xf2] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xf3] = INAT_MAKE_GROUP(23), [0xf5] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_VARIANT, [0xf6] = INAT_VARIANT, [0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_VARIANT, }; const insn_attr_t inat_escape_table_2_1[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_MODRM | INAT_VEXOK, [0x01] = INAT_MODRM | INAT_VEXOK, [0x02] = INAT_MODRM | INAT_VEXOK, [0x03] = INAT_MODRM | INAT_VEXOK, [0x04] = INAT_MODRM | INAT_VEXOK, [0x05] = INAT_MODRM | INAT_VEXOK, [0x06] = INAT_MODRM | INAT_VEXOK, [0x07] = INAT_MODRM | INAT_VEXOK, [0x08] = INAT_MODRM | INAT_VEXOK, [0x09] = INAT_MODRM | INAT_VEXOK, [0x0a] = INAT_MODRM | INAT_VEXOK, [0x0b] = INAT_MODRM | INAT_VEXOK, [0x0c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x0d] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x0e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x0f] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x10] = INAT_MODRM, [0x13] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x14] = INAT_MODRM, [0x15] = INAT_MODRM, [0x16] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x17] = INAT_MODRM | INAT_VEXOK, [0x18] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x19] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x1a] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x1c] = INAT_MODRM | INAT_VEXOK, [0x1d] = INAT_MODRM | INAT_VEXOK, [0x1e] = INAT_MODRM | INAT_VEXOK, [0x20] = INAT_MODRM | INAT_VEXOK, [0x21] = 
INAT_MODRM | INAT_VEXOK, [0x22] = INAT_MODRM | INAT_VEXOK, [0x23] = INAT_MODRM | INAT_VEXOK, [0x24] = INAT_MODRM | INAT_VEXOK, [0x25] = INAT_MODRM | INAT_VEXOK, [0x28] = INAT_MODRM | INAT_VEXOK, [0x29] = INAT_MODRM | INAT_VEXOK, [0x2a] = INAT_MODRM | INAT_VEXOK, [0x2b] = INAT_MODRM | INAT_VEXOK, [0x2c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x2d] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x2e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x2f] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x30] = INAT_MODRM | INAT_VEXOK, [0x31] = INAT_MODRM | INAT_VEXOK, [0x32] = INAT_MODRM | INAT_VEXOK, [0x33] = INAT_MODRM | INAT_VEXOK, [0x34] = INAT_MODRM | INAT_VEXOK, [0x35] = INAT_MODRM | INAT_VEXOK, [0x36] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x37] = INAT_MODRM | INAT_VEXOK, [0x38] = INAT_MODRM | INAT_VEXOK, [0x39] = INAT_MODRM | INAT_VEXOK, [0x3a] = INAT_MODRM | INAT_VEXOK, [0x3b] = INAT_MODRM | INAT_VEXOK, [0x3c] = INAT_MODRM | INAT_VEXOK, [0x3d] = INAT_MODRM | INAT_VEXOK, [0x3e] = INAT_MODRM | INAT_VEXOK, [0x3f] = INAT_MODRM | INAT_VEXOK, [0x40] = INAT_MODRM | INAT_VEXOK, [0x41] = INAT_MODRM | INAT_VEXOK, [0x45] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x46] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x47] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x58] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x59] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x5a] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x78] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x79] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x80] = INAT_MODRM, [0x81] = INAT_MODRM, [0x82] = INAT_MODRM, [0x8c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x8e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x90] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x91] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x92] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x93] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x96] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x97] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x98] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x99] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9a] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9b] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9d] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9f] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xa6] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xa7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xa8] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xa9] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xaa] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xab] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xac] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xad] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xae] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xaf] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xb6] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xb7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xb8] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xb9] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xba] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xbb] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xbc] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xbd] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xbe] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xbf] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xdb] = INAT_MODRM | INAT_VEXOK, [0xdc] = INAT_MODRM | INAT_VEXOK, [0xdd] = INAT_MODRM | INAT_VEXOK, [0xde] = INAT_MODRM | INAT_VEXOK, [0xdf] = INAT_MODRM | INAT_VEXOK, [0xf0] = INAT_MODRM, [0xf1] = INAT_MODRM, 
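	/*
	 * Editor's annotation, not part of the generated table: each entry ORs
	 * attribute bits together.  INAT_MODRM means a ModRM byte follows the
	 * opcode, INAT_MAKE_IMM(x) encodes the immediate size, INAT_VEXOK means
	 * a VEX prefix is accepted, INAT_VEXONLY that one is required, and
	 * INAT_VARIANT (in the base tables) that a last-prefix-specific
	 * sub-table such as this one overrides the default attribute.
	 */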
[0xf6] = INAT_MODRM, [0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, }; const insn_attr_t inat_escape_table_2_2[INAT_OPCODE_TABLE_SIZE] = { [0xf5] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xf6] = INAT_MODRM, [0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, }; const insn_attr_t inat_escape_table_2_3[INAT_OPCODE_TABLE_SIZE] = { [0xf0] = INAT_MODRM | INAT_MODRM, [0xf1] = INAT_MODRM | INAT_MODRM, [0xf5] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xf6] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, }; /* Table: 3-byte opcode 2 (0x0f 0x3a) */ const insn_attr_t inat_escape_table_3[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_VARIANT, [0x01] = INAT_VARIANT, [0x02] = INAT_VARIANT, [0x04] = INAT_VARIANT, [0x05] = INAT_VARIANT, [0x06] = INAT_VARIANT, [0x08] = INAT_VARIANT, [0x09] = INAT_VARIANT, [0x0a] = INAT_VARIANT, [0x0b] = INAT_VARIANT, [0x0c] = INAT_VARIANT, [0x0d] = INAT_VARIANT, [0x0e] = INAT_VARIANT, [0x0f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x14] = INAT_VARIANT, [0x15] = INAT_VARIANT, [0x16] = INAT_VARIANT, [0x17] = INAT_VARIANT, [0x18] = INAT_VARIANT, [0x19] = INAT_VARIANT, [0x1d] = INAT_VARIANT, [0x20] = INAT_VARIANT, [0x21] = INAT_VARIANT, [0x22] = INAT_VARIANT, [0x38] = INAT_VARIANT, [0x39] = INAT_VARIANT, [0x40] = INAT_VARIANT, [0x41] = INAT_VARIANT, [0x42] = INAT_VARIANT, [0x44] = INAT_VARIANT, [0x46] = INAT_VARIANT, [0x4a] = INAT_VARIANT, [0x4b] = INAT_VARIANT, [0x4c] = INAT_VARIANT, [0x60] = INAT_VARIANT, [0x61] = INAT_VARIANT, [0x62] = INAT_VARIANT, [0x63] = INAT_VARIANT, [0xdf] = INAT_VARIANT, [0xf0] = INAT_VARIANT, }; const insn_attr_t inat_escape_table_3_1[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x01] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x02] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x04] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x05] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x06] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x08] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x09] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0c] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0d] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0e] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x14] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x15] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x16] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x17] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x18] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x19] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x1d] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x20] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x21] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x22] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x38] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x39] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x40] = 
INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x41] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x42] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x44] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x46] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x4a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x4b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x4c] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x60] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x61] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x62] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x63] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xdf] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, }; const insn_attr_t inat_escape_table_3_3[INAT_OPCODE_TABLE_SIZE] = { [0xf0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, }; /* GrpTable: Grp1 */ /* GrpTable: Grp1A */ /* GrpTable: Grp2 */ /* GrpTable: Grp3_1 */ const insn_attr_t inat_group_table_6[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, [0x4] = INAT_MODRM, [0x5] = INAT_MODRM, [0x6] = INAT_MODRM, [0x7] = INAT_MODRM, }; /* GrpTable: Grp3_2 */ const insn_attr_t inat_group_table_7[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, [0x4] = INAT_MODRM, [0x5] = INAT_MODRM, [0x6] = INAT_MODRM, [0x7] = INAT_MODRM, }; /* GrpTable: Grp4 */ const insn_attr_t inat_group_table_8[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, }; /* GrpTable: Grp5 */ const insn_attr_t inat_group_table_9[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, [0x2] = INAT_MODRM | INAT_FORCE64, [0x3] = INAT_MODRM, [0x4] = INAT_MODRM | INAT_FORCE64, [0x5] = INAT_MODRM, [0x6] = INAT_MODRM | INAT_FORCE64, }; /* GrpTable: Grp6 */ const insn_attr_t inat_group_table_10[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, [0x4] = INAT_MODRM, [0x5] = INAT_MODRM, }; /* GrpTable: Grp7 */ const insn_attr_t inat_group_table_11[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, [0x4] = INAT_MODRM, [0x6] = INAT_MODRM, [0x7] = INAT_MODRM, }; /* GrpTable: Grp8 */ /* GrpTable: Grp9 */ const insn_attr_t inat_group_table_22[INAT_GROUP_TABLE_SIZE] = { [0x1] = INAT_MODRM, [0x6] = INAT_MODRM | INAT_MODRM | INAT_VARIANT, [0x7] = INAT_MODRM | INAT_MODRM | INAT_VARIANT, }; const insn_attr_t inat_group_table_22_1[INAT_GROUP_TABLE_SIZE] = { [0x6] = INAT_MODRM, }; const insn_attr_t inat_group_table_22_2[INAT_GROUP_TABLE_SIZE] = { [0x6] = INAT_MODRM, [0x7] = INAT_MODRM, }; /* GrpTable: Grp10 */ /* GrpTable: Grp11A */ const insn_attr_t inat_group_table_4[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0x7] = INAT_MAKE_IMM(INAT_IMM_BYTE), }; /* GrpTable: Grp11B */ const insn_attr_t inat_group_table_5[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM, [0x7] = INAT_MAKE_IMM(INAT_IMM_VWORD32), }; /* GrpTable: Grp12 */ const insn_attr_t inat_group_table_14[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | 
INAT_VARIANT, }; const insn_attr_t inat_group_table_14_1[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, }; /* GrpTable: Grp13 */ const insn_attr_t inat_group_table_15[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, }; const insn_attr_t inat_group_table_15_1[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, }; /* GrpTable: Grp14 */ const insn_attr_t inat_group_table_16[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x3] = INAT_VARIANT, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x7] = INAT_VARIANT, }; const insn_attr_t inat_group_table_16_1[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x3] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x7] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, }; /* GrpTable: Grp15 */ const insn_attr_t inat_group_table_19[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_VARIANT, [0x1] = INAT_VARIANT, [0x2] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x3] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, }; const insn_attr_t inat_group_table_19_2[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, }; /* GrpTable: Grp16 */ const insn_attr_t inat_group_table_13[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, }; /* GrpTable: Grp17 */ const insn_attr_t inat_group_table_23[INAT_GROUP_TABLE_SIZE] = { [0x1] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x2] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x3] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, }; /* GrpTable: GrpP */ /* GrpTable: GrpPDLK */ /* GrpTable: GrpRNG */ /* Escape opcode map array */ const insn_attr_t * const inat_escape_tables[INAT_ESC_MAX + 1][INAT_LSTPFX_MAX + 1] = { [1][0] = inat_escape_table_1, [1][1] = inat_escape_table_1_1, [1][2] = inat_escape_table_1_2, [1][3] = inat_escape_table_1_3, [2][0] = inat_escape_table_2, [2][1] = inat_escape_table_2_1, [2][2] = inat_escape_table_2_2, [2][3] = inat_escape_table_2_3, [3][0] = inat_escape_table_3, [3][1] = inat_escape_table_3_1, [3][3] = inat_escape_table_3_3, }; /* Group opcode map array */ const insn_attr_t * const inat_group_tables[INAT_GRP_MAX + 1][INAT_LSTPFX_MAX + 1] = { [4][0] = inat_group_table_4, [5][0] = inat_group_table_5, [6][0] = inat_group_table_6, [7][0] = inat_group_table_7, [8][0] = inat_group_table_8, [9][0] = inat_group_table_9, [10][0] = inat_group_table_10, [11][0] = inat_group_table_11, [13][0] = inat_group_table_13, [14][0] = inat_group_table_14, [14][1] = inat_group_table_14_1, [15][0] = inat_group_table_15, [15][1] = inat_group_table_15_1, [16][0] = inat_group_table_16, [16][1] = inat_group_table_16_1, [19][0] = inat_group_table_19, [19][2] = inat_group_table_19_2, [22][0] = inat_group_table_22, [22][1] = inat_group_table_22_1, [22][2] = inat_group_table_22_2, [23][0] = inat_group_table_23, }; /* AVX opcode map array */ const insn_attr_t 
* const inat_avx_tables[X86_VEX_M_MAX + 1][INAT_LSTPFX_MAX + 1] = { [1][0] = inat_escape_table_1, [1][1] = inat_escape_table_1_1, [1][2] = inat_escape_table_1_2, [1][3] = inat_escape_table_1_3, [2][0] = inat_escape_table_2, [2][1] = inat_escape_table_2_1, [2][2] = inat_escape_table_2_2, [2][3] = inat_escape_table_2_3, [3][0] = inat_escape_table_3, [3][1] = inat_escape_table_3_1, [3][3] = inat_escape_table_3_3, }; kpatch-0.5.0/kpatch-build/insn/inat.c000066400000000000000000000051021321664017000174010ustar00rootroot00000000000000/* * x86 instruction attribute tables * * Written by Masami Hiramatsu * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * */ #include /* Attribute tables are generated from opcode map */ #include "inat-tables.c" /* Attribute search APIs */ insn_attr_t inat_get_opcode_attribute(insn_byte_t opcode) { return inat_primary_table[opcode]; } int inat_get_last_prefix_id(insn_byte_t last_pfx) { insn_attr_t lpfx_attr; lpfx_attr = inat_get_opcode_attribute(last_pfx); return inat_last_prefix_id(lpfx_attr); } insn_attr_t inat_get_escape_attribute(insn_byte_t opcode, int lpfx_id, insn_attr_t esc_attr) { const insn_attr_t *table; int n; n = inat_escape_id(esc_attr); table = inat_escape_tables[n][0]; if (!table) return 0; if (inat_has_variant(table[opcode]) && lpfx_id) { table = inat_escape_tables[n][lpfx_id]; if (!table) return 0; } return table[opcode]; } insn_attr_t inat_get_group_attribute(insn_byte_t modrm, int lpfx_id, insn_attr_t grp_attr) { const insn_attr_t *table; int n; n = inat_group_id(grp_attr); table = inat_group_tables[n][0]; if (!table) return inat_group_common_attribute(grp_attr); if (inat_has_variant(table[X86_MODRM_REG(modrm)]) && lpfx_id) { table = inat_group_tables[n][lpfx_id]; if (!table) return inat_group_common_attribute(grp_attr); } return table[X86_MODRM_REG(modrm)] | inat_group_common_attribute(grp_attr); } insn_attr_t inat_get_avx_attribute(insn_byte_t opcode, insn_byte_t vex_m, insn_byte_t vex_p) { const insn_attr_t *table; if (vex_m > X86_VEX_M_MAX || vex_p > INAT_LSTPFX_MAX) return 0; /* At first, this checks the master table */ table = inat_avx_tables[vex_m][0]; if (!table) return 0; if (!inat_is_group(table[opcode]) && vex_p) { /* If this is not a group, get attribute directly */ table = inat_avx_tables[vex_m][vex_p]; if (!table) return 0; } return table[opcode]; } kpatch-0.5.0/kpatch-build/insn/insn.c000066400000000000000000000345471321664017000174340ustar00rootroot00000000000000/* * x86 instruction analysis * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * * Copyright (C) IBM Corporation, 2002, 2004, 2009 */ #ifdef __KERNEL__ #include #else #include #endif #include #include #define unlikely(a) a /* Verify next sizeof(t) bytes can be on the same instruction */ #define validate_next(t, insn, n) \ ((insn)->next_byte + sizeof(t) + n - (insn)->kaddr <= MAX_INSN_SIZE) #define __get_next(t, insn) \ ({ t r = *(t*)insn->next_byte; insn->next_byte += sizeof(t); r; }) #define __peek_nbyte_next(t, insn, n) \ ({ t r = *(t*)((insn)->next_byte + n); r; }) #define get_next(t, insn) \ ({ if (unlikely(!validate_next(t, insn, 0))) goto err_out; __get_next(t, insn); }) #define peek_nbyte_next(t, insn, n) \ ({ if (unlikely(!validate_next(t, insn, n))) goto err_out; __peek_nbyte_next(t, insn, n); }) #define peek_next(t, insn) peek_nbyte_next(t, insn, 0) /** * insn_init() - initialize struct insn * @insn: &struct insn to be initialized * @kaddr: address (in kernel memory) of instruction (or copy thereof) * @x86_64: !0 for 64-bit kernel or 64-bit app */ void insn_init(struct insn *insn, const void *kaddr, int x86_64) { memset(insn, 0, sizeof(*insn)); insn->kaddr = kaddr; insn->next_byte = kaddr; insn->x86_64 = x86_64 ? 1 : 0; insn->opnd_bytes = 4; if (x86_64) insn->addr_bytes = 8; else insn->addr_bytes = 4; } /** * insn_get_prefixes - scan x86 instruction prefix bytes * @insn: &struct insn containing instruction * * Populates the @insn->prefixes bitmap, and updates @insn->next_byte * to point to the (first) opcode. No effect if @insn->prefixes.got * is already set. 
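 *
 * Illustrative example added by the editor (not in the original source;
 * byte values chosen for illustration): for the sequence "66 89 d8"
 * (mov %bx,%ax), the 0x66 legacy prefix is recorded in @insn->prefixes
 * and the operand-size toggle below flips @insn->opnd_bytes from 4 to 2
 * before the 0x89 opcode byte is reached.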
*/ void insn_get_prefixes(struct insn *insn) { struct insn_field *prefixes = &insn->prefixes; insn_attr_t attr; insn_byte_t b, lb; int i, nb; if (prefixes->got) return; nb = 0; lb = 0; b = peek_next(insn_byte_t, insn); attr = inat_get_opcode_attribute(b); while (inat_is_legacy_prefix(attr)) { /* Skip if same prefix */ for (i = 0; i < nb; i++) if (prefixes->bytes[i] == b) goto found; if (nb == 4) /* Invalid instruction */ break; prefixes->bytes[nb++] = b; if (inat_is_address_size_prefix(attr)) { /* address size switches 2/4 or 4/8 */ if (insn->x86_64) insn->addr_bytes ^= 12; else insn->addr_bytes ^= 6; } else if (inat_is_operand_size_prefix(attr)) { /* oprand size switches 2/4 */ insn->opnd_bytes ^= 6; } found: prefixes->nbytes++; insn->next_byte++; lb = b; b = peek_next(insn_byte_t, insn); attr = inat_get_opcode_attribute(b); } /* Set the last prefix */ if (lb && lb != insn->prefixes.bytes[3]) { if (unlikely(insn->prefixes.bytes[3])) { /* Swap the last prefix */ b = insn->prefixes.bytes[3]; for (i = 0; i < nb; i++) if (prefixes->bytes[i] == lb) prefixes->bytes[i] = b; } insn->prefixes.bytes[3] = lb; } /* Decode REX prefix */ if (insn->x86_64) { b = peek_next(insn_byte_t, insn); attr = inat_get_opcode_attribute(b); if (inat_is_rex_prefix(attr)) { insn->rex_prefix.value = b; insn->rex_prefix.nbytes = 1; insn->next_byte++; if (X86_REX_W(b)) /* REX.W overrides opnd_size */ insn->opnd_bytes = 8; } } insn->rex_prefix.got = 1; /* Decode VEX prefix */ b = peek_next(insn_byte_t, insn); attr = inat_get_opcode_attribute(b); if (inat_is_vex_prefix(attr)) { insn_byte_t b2 = peek_nbyte_next(insn_byte_t, insn, 1); if (!insn->x86_64) { /* * In 32-bits mode, if the [7:6] bits (mod bits of * ModRM) on the second byte are not 11b, it is * LDS or LES. */ if (X86_MODRM_MOD(b2) != 3) goto vex_end; } insn->vex_prefix.bytes[0] = b; insn->vex_prefix.bytes[1] = b2; if (inat_is_vex3_prefix(attr)) { b2 = peek_nbyte_next(insn_byte_t, insn, 2); insn->vex_prefix.bytes[2] = b2; insn->vex_prefix.nbytes = 3; insn->next_byte += 3; if (insn->x86_64 && X86_VEX_W(b2)) /* VEX.W overrides opnd_size */ insn->opnd_bytes = 8; } else { insn->vex_prefix.nbytes = 2; insn->next_byte += 2; } } vex_end: insn->vex_prefix.got = 1; prefixes->got = 1; err_out: return; } /** * insn_get_opcode - collect opcode(s) * @insn: &struct insn containing instruction * * Populates @insn->opcode, updates @insn->next_byte to point past the * opcode byte(s), and set @insn->attr (except for groups). * If necessary, first collects any preceding (prefix) bytes. * Sets @insn->opcode.value = opcode1. No effect if @insn->opcode.got * is already 1. 
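 *
 * Illustrative example added by the editor (not in the original source):
 * for the two-byte sequence "0f 10" (movups), the 0x0f byte is flagged as
 * an escape, so the loop below consumes the second opcode byte and
 * re-resolves @insn->attr from the corresponding escape table
 * (inat_escape_table_1, or one of its inat_escape_table_1_* variants when
 * a relevant legacy prefix is active).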
*/ void insn_get_opcode(struct insn *insn) { struct insn_field *opcode = &insn->opcode; insn_byte_t op; int pfx_id; if (opcode->got) return; if (!insn->prefixes.got) insn_get_prefixes(insn); /* Get first opcode */ op = get_next(insn_byte_t, insn); opcode->bytes[0] = op; opcode->nbytes = 1; /* Check if there is VEX prefix or not */ if (insn_is_avx(insn)) { insn_byte_t m, p; m = insn_vex_m_bits(insn); p = insn_vex_p_bits(insn); insn->attr = inat_get_avx_attribute(op, m, p); if (!inat_accept_vex(insn->attr) && !inat_is_group(insn->attr)) insn->attr = 0; /* This instruction is bad */ goto end; /* VEX has only 1 byte for opcode */ } insn->attr = inat_get_opcode_attribute(op); while (inat_is_escape(insn->attr)) { /* Get escaped opcode */ op = get_next(insn_byte_t, insn); opcode->bytes[opcode->nbytes++] = op; pfx_id = insn_last_prefix_id(insn); insn->attr = inat_get_escape_attribute(op, pfx_id, insn->attr); } if (inat_must_vex(insn->attr)) insn->attr = 0; /* This instruction is bad */ end: opcode->got = 1; err_out: return; } /** * insn_get_modrm - collect ModRM byte, if any * @insn: &struct insn containing instruction * * Populates @insn->modrm and updates @insn->next_byte to point past the * ModRM byte, if any. If necessary, first collects the preceding bytes * (prefixes and opcode(s)). No effect if @insn->modrm.got is already 1. */ void insn_get_modrm(struct insn *insn) { struct insn_field *modrm = &insn->modrm; insn_byte_t pfx_id, mod; if (modrm->got) return; if (!insn->opcode.got) insn_get_opcode(insn); if (inat_has_modrm(insn->attr)) { mod = get_next(insn_byte_t, insn); modrm->value = mod; modrm->nbytes = 1; if (inat_is_group(insn->attr)) { pfx_id = insn_last_prefix_id(insn); insn->attr = inat_get_group_attribute(mod, pfx_id, insn->attr); if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) insn->attr = 0; /* This is bad */ } } if (insn->x86_64 && inat_is_force64(insn->attr)) insn->opnd_bytes = 8; modrm->got = 1; err_out: return; } /** * insn_rip_relative() - Does instruction use RIP-relative addressing mode? * @insn: &struct insn containing instruction * * If necessary, first collects the instruction up to and including the * ModRM byte. No effect if @insn->x86_64 is 0. */ int insn_rip_relative(struct insn *insn) { struct insn_field *modrm = &insn->modrm; if (!insn->x86_64) return 0; if (!modrm->got) insn_get_modrm(insn); /* * For rip-relative instructions, the mod field (top 2 bits) * is zero and the r/m field (bottom 3 bits) is 0x5. */ return (modrm->nbytes && (modrm->value & 0xc7) == 0x5); } /** * insn_get_sib() - Get the SIB byte of instruction * @insn: &struct insn containing instruction * * If necessary, first collects the instruction up to and including the * ModRM byte. */ void insn_get_sib(struct insn *insn) { insn_byte_t modrm; if (insn->sib.got) return; if (!insn->modrm.got) insn_get_modrm(insn); if (insn->modrm.nbytes) { modrm = (insn_byte_t)insn->modrm.value; if (insn->addr_bytes != 2 && X86_MODRM_MOD(modrm) != 3 && X86_MODRM_RM(modrm) == 4) { insn->sib.value = get_next(insn_byte_t, insn); insn->sib.nbytes = 1; } } insn->sib.got = 1; err_out: return; } /** * insn_get_displacement() - Get the displacement of instruction * @insn: &struct insn containing instruction * * If necessary, first collects the instruction up to and including the * SIB byte. * Displacement value is sign-expanded. 
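 *
 * Illustrative example added by the editor (not in the original source;
 * byte values chosen for illustration): for "48 8b 58 08"
 * (mov 0x8(%rax),%rbx) the ModRM mod field is 01, so a single
 * displacement byte (0x08) is read and sign-extended into
 * @insn->displacement.value with @insn->displacement.nbytes = 1.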
*/ void insn_get_displacement(struct insn *insn) { insn_byte_t mod, rm, base; if (insn->displacement.got) return; if (!insn->sib.got) insn_get_sib(insn); if (insn->modrm.nbytes) { /* * Interpreting the modrm byte: * mod = 00 - no displacement fields (exceptions below) * mod = 01 - 1-byte displacement field * mod = 10 - displacement field is 4 bytes, or 2 bytes if * address size = 2 (0x67 prefix in 32-bit mode) * mod = 11 - no memory operand * * If address size = 2... * mod = 00, r/m = 110 - displacement field is 2 bytes * * If address size != 2... * mod != 11, r/m = 100 - SIB byte exists * mod = 00, SIB base = 101 - displacement field is 4 bytes * mod = 00, r/m = 101 - rip-relative addressing, displacement * field is 4 bytes */ mod = X86_MODRM_MOD(insn->modrm.value); rm = X86_MODRM_RM(insn->modrm.value); base = X86_SIB_BASE(insn->sib.value); if (mod == 3) goto out; if (mod == 1) { insn->displacement.value = get_next(char, insn); insn->displacement.nbytes = 1; } else if (insn->addr_bytes == 2) { if ((mod == 0 && rm == 6) || mod == 2) { insn->displacement.value = get_next(short, insn); insn->displacement.nbytes = 2; } } else { if ((mod == 0 && rm == 5) || mod == 2 || (mod == 0 && base == 5)) { insn->displacement.value = get_next(int, insn); insn->displacement.nbytes = 4; } } } out: insn->displacement.got = 1; err_out: return; } /* Decode moffset16/32/64. Return 0 if failed */ static int __get_moffset(struct insn *insn) { switch (insn->addr_bytes) { case 2: insn->moffset1.value = get_next(short, insn); insn->moffset1.nbytes = 2; break; case 4: insn->moffset1.value = get_next(int, insn); insn->moffset1.nbytes = 4; break; case 8: insn->moffset1.value = get_next(int, insn); insn->moffset1.nbytes = 4; insn->moffset2.value = get_next(int, insn); insn->moffset2.nbytes = 4; break; default: /* opnd_bytes must be modified manually */ goto err_out; } insn->moffset1.got = insn->moffset2.got = 1; return 1; err_out: return 0; } /* Decode imm v32(Iz). 
Return 0 if failed */ static int __get_immv32(struct insn *insn) { switch (insn->opnd_bytes) { case 2: insn->immediate.value = get_next(short, insn); insn->immediate.nbytes = 2; break; case 4: case 8: insn->immediate.value = get_next(int, insn); insn->immediate.nbytes = 4; break; default: /* opnd_bytes must be modified manually */ goto err_out; } return 1; err_out: return 0; } /* Decode imm v64(Iv/Ov), Return 0 if failed */ static int __get_immv(struct insn *insn) { switch (insn->opnd_bytes) { case 2: insn->immediate1.value = get_next(short, insn); insn->immediate1.nbytes = 2; break; case 4: insn->immediate1.value = get_next(int, insn); insn->immediate1.nbytes = 4; break; case 8: insn->immediate1.value = get_next(int, insn); insn->immediate1.nbytes = 4; insn->immediate2.value = get_next(int, insn); insn->immediate2.nbytes = 4; break; default: /* opnd_bytes must be modified manually */ goto err_out; } insn->immediate1.got = insn->immediate2.got = 1; return 1; err_out: return 0; } /* Decode ptr16:16/32(Ap) */ static int __get_immptr(struct insn *insn) { switch (insn->opnd_bytes) { case 2: insn->immediate1.value = get_next(short, insn); insn->immediate1.nbytes = 2; break; case 4: insn->immediate1.value = get_next(int, insn); insn->immediate1.nbytes = 4; break; case 8: /* ptr16:64 is not exist (no segment) */ return 0; default: /* opnd_bytes must be modified manually */ goto err_out; } insn->immediate2.value = get_next(unsigned short, insn); insn->immediate2.nbytes = 2; insn->immediate1.got = insn->immediate2.got = 1; return 1; err_out: return 0; } /** * insn_get_immediate() - Get the immediates of instruction * @insn: &struct insn containing instruction * * If necessary, first collects the instruction up to and including the * displacement bytes. * Basically, most of immediates are sign-expanded. Unsigned-value can be * get by bit masking with ((1 << (nbytes * 8)) - 1) */ void insn_get_immediate(struct insn *insn) { if (insn->immediate.got) return; if (!insn->displacement.got) insn_get_displacement(insn); if (inat_has_moffset(insn->attr)) { if (!__get_moffset(insn)) goto err_out; goto done; } if (!inat_has_immediate(insn->attr)) /* no immediates */ goto done; switch (inat_immediate_size(insn->attr)) { case INAT_IMM_BYTE: insn->immediate.value = get_next(char, insn); insn->immediate.nbytes = 1; break; case INAT_IMM_WORD: insn->immediate.value = get_next(short, insn); insn->immediate.nbytes = 2; break; case INAT_IMM_DWORD: insn->immediate.value = get_next(int, insn); insn->immediate.nbytes = 4; break; case INAT_IMM_QWORD: insn->immediate1.value = get_next(int, insn); insn->immediate1.nbytes = 4; insn->immediate2.value = get_next(int, insn); insn->immediate2.nbytes = 4; break; case INAT_IMM_PTR: if (!__get_immptr(insn)) goto err_out; break; case INAT_IMM_VWORD32: if (!__get_immv32(insn)) goto err_out; break; case INAT_IMM_VWORD: if (!__get_immv(insn)) goto err_out; break; default: /* Here, insn must have an immediate, but failed */ goto err_out; } if (inat_has_second_immediate(insn->attr)) { insn->immediate2.value = get_next(char, insn); insn->immediate2.nbytes = 1; } done: insn->immediate.got = 1; err_out: return; } /** * insn_get_length() - Get the length of instruction * @insn: &struct insn containing instruction * * If necessary, first collects the instruction up to and including the * immediates bytes. 
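 *
 * Minimal usage sketch added by the editor (not in the original source;
 * "buf" is an assumed caller-supplied buffer holding at least
 * MAX_INSN_SIZE bytes of x86-64 code):
 *
 *	struct insn insn;
 *
 *	insn_init(&insn, buf, 1);	(1 = decode as 64-bit code)
 *	insn_get_length(&insn);		(pulls in prefixes..immediates)
 *
 * afterwards insn.length holds the encoded size of the instruction and
 * insn.opcode.bytes[0] its first opcode byte.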
*/ void insn_get_length(struct insn *insn) { if (insn->length) return; if (!insn->immediate.got) insn_get_immediate(insn); insn->length = (unsigned char)((unsigned long)insn->next_byte - (unsigned long)insn->kaddr); } kpatch-0.5.0/kpatch-build/kpatch-build000077500000000000000000000633441321664017000176440ustar00rootroot00000000000000#!/bin/bash # # kpatch build script # # Copyright (C) 2014 Seth Jennings # Copyright (C) 2013,2014 Josh Poimboeuf # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, # 02110-1301, USA. # This script takes a patch based on the version of the kernel # currently running and creates a kernel module that will # replace modified functions in the kernel such that the # patched code takes effect. # This script: # - Either uses a specified kernel source directory or downloads the kernel # source package for the currently running kernel # - Unpacks and prepares the source package for building if necessary # - Builds the base kernel (vmlinux) # - Builds the patched kernel and monitors changed objects # - Builds the patched objects with gcc flags -f[function|data]-sections # - Runs kpatch tools to create and link the patch kernel module set -o pipefail BASE="$PWD" SCRIPTDIR="$(readlink -f "$(dirname "$(type -p "$0")")")" ARCH="$(uname -m)" CPUS="$(getconf _NPROCESSORS_ONLN)" CACHEDIR="${CACHEDIR:-$HOME/.kpatch}" SRCDIR="$CACHEDIR/src" RPMTOPDIR="$CACHEDIR/buildroot" VERSIONFILE="$CACHEDIR/version" TEMPDIR="$CACHEDIR/tmp" LOGFILE="$CACHEDIR/build.log" DEBUG=0 SKIPCLEANUP=0 SKIPGCCCHECK=0 ARCH_KCFLAGS="" declare -a PATCH_LIST APPLIED_PATCHES=0 warn() { echo "ERROR: $1" >&2 } die() { if [[ -z "$1" ]]; then msg="kpatch build failed" else msg="$1" fi if [[ -e "$LOGFILE" ]]; then warn "$msg. Check $LOGFILE for more details." else warn "$msg." fi exit 1 } logger() { local to_stdout=${1:-0} if [[ $DEBUG -ge 2 ]] || [[ "$to_stdout" -eq 1 ]]; then # Log to both stdout and the logfile tee -a "$LOGFILE" else # Log only to the logfile cat >> "$LOGFILE" fi } apply_patches() { local patch for patch in "${PATCH_LIST[@]}"; do patch -N -p1 --dry-run < "$patch" 2>&1 | logger || die "$patch file failed to apply" patch -N -p1 < "$patch" 2>&1 | logger || die "$patch file failed to apply" (( APPLIED_PATCHES++ )) done } remove_patches() { local patch local idx for (( ; APPLIED_PATCHES>0; APPLIED_PATCHES-- )); do idx=$(( APPLIED_PATCHES - 1)) patch="${PATCH_LIST[$idx]}" patch -p1 -R -d "$SRCDIR" < "$patch" &> /dev/null done } cleanup() { rm -f "$SRCDIR/.scmversion" remove_patches # If $SRCDIR was a git repo, make sure git actually sees that # we've reverted our patch(es). 
[[ -d "$SRCDIR/.git" ]] && (cd "$SRCDIR" && git update-index -q --refresh) # restore original .config and vmlinux if they were removed with mrproper [[ -e "$TEMPDIR/.config" ]] && mv -f "$TEMPDIR/.config" "$SRCDIR/" [[ -e "$TEMPDIR/vmlinux" ]] && mv -f "$TEMPDIR/vmlinux" "$SRCDIR/" [[ "$DEBUG" -eq 0 ]] && rm -rf "$TEMPDIR" rm -rf "$RPMTOPDIR" unset KCFLAGS unset KCPPFLAGS } clean_cache() { rm -rf "$CACHEDIR" mkdir -p "$TEMPDIR" || die "Couldn't create $TEMPDIR" } check_pipe_status() { rc="${PIPESTATUS[0]}" if [[ "$rc" = 139 ]]; then # There doesn't seem to be a consistent/portable way of # accessing the last executed command in bash, so just # pass in the script name for now.. warn "$1 SIGSEGV" if ls core* &> /dev/null; then cp core* /tmp die "core file at /tmp/$(ls core*)" fi die "no core file found, run 'ulimit -c unlimited' and try to recreate" fi } # $1 >= $2 version_gte() { [ "$1" = "$(echo -e "$1\n$2" | sort -rV | head -n1)" ] } is_rhel() { [[ $1 =~ \.el[78]\. ]] } find_dirs() { if [[ -e "$SCRIPTDIR/create-diff-object" ]]; then # git repo TOOLSDIR="$SCRIPTDIR" DATADIR="$(readlink -f "$SCRIPTDIR/../kmod")" PLUGINDIR="$(readlink -f "$SCRIPTDIR/gcc-plugins")" elif [[ -e "$SCRIPTDIR/../libexec/kpatch/create-diff-object" ]]; then # installation path TOOLSDIR="$(readlink -f "$SCRIPTDIR/../libexec/kpatch")" DATADIR="$(readlink -f "$SCRIPTDIR/../share/kpatch")" PLUGINDIR="$TOOLSDIR" else return 1 fi } find_core_symvers() { SYMVERSFILE="" if [[ -e "$SCRIPTDIR/create-diff-object" ]]; then # git repo SYMVERSFILE="$DATADIR/core/Module.symvers" elif [[ -e "$SCRIPTDIR/../libexec/kpatch/create-diff-object" ]]; then # installation path if [[ -e "$SCRIPTDIR/../lib/kpatch/$ARCHVERSION/Module.symvers" ]]; then SYMVERSFILE="$(readlink -f "$SCRIPTDIR/../lib/kpatch/$ARCHVERSION/Module.symvers")" elif [[ -e /lib/modules/$ARCHVERSION/extra/kpatch/Module.symvers ]]; then SYMVERSFILE="$(readlink -f "/lib/modules/$ARCHVERSION/extra/kpatch/Module.symvers")" fi fi [[ -e "$SYMVERSFILE" ]] } gcc_version_from_file() { readelf -p .comment "$1" | grep -o 'GCC:.*' } gcc_version_check() { local c="$TEMPDIR/test.c" o="$TEMPDIR/test.o" local out gccver kgccver # gcc --version varies between distributions therefore extract version # by compiling a test file and compare it to vmlinux's version. 
echo 'void main(void) {}' > "$c" out="$(gcc -c -pg -ffunction-sections -o "$o" "$c" 2>&1)" gccver="$(gcc_version_from_file "$o")" kgccver="$(gcc_version_from_file "$VMLINUX")" rm -f "$c" "$o" if [[ -n "$out" ]]; then warn "gcc >= 4.8 required for -pg -ffunction-sections" echo "gcc output: $out" return 1 fi # ensure gcc version matches that used to build the kernel if [[ "$gccver" != "$kgccver" ]]; then warn "gcc/kernel version mismatch" echo "gcc version: $gccver" echo "kernel version: $kgccver" echo "Install the matching gcc version (recommended) or use --skip-gcc-check" echo "to skip the version matching enforcement (not recommended)" return 1 fi return } find_special_section_data_ppc64le() { SPECIAL_VARS="$(readelf -wi "$VMLINUX" | gawk --non-decimal-data ' BEGIN { f = b = e = 0 } # Set state if name matches f == 0 && /DW_AT_name.* fixup_entry[[:space:]]*$/ {f = 1; next} b == 0 && /DW_AT_name.* bug_entry[[:space:]]*$/ {b = 1; next} e == 0 && /DW_AT_name.* exception_table_entry[[:space:]]*$/ {e = 1; next} # Reset state unless this abbrev describes the struct size f == 1 && !/DW_AT_byte_size/ { f = 0; next } b == 1 && !/DW_AT_byte_size/ { b = 0; next } e == 1 && !/DW_AT_byte_size/ { e = 0; next } # Now that we know the size, stop parsing for it f == 1 {printf("export FIXUP_STRUCT_SIZE=%d\n", $4); f = 2} b == 1 {printf("export BUG_STRUCT_SIZE=%d\n", $4); b = 2} e == 1 {printf("export EX_STRUCT_SIZE=%d\n", $4); e = 2} # Bail out once we have everything f == 2 && b == 2 && e == 2 {exit}')" [[ -n "$SPECIAL_VARS" ]] && eval "$SPECIAL_VARS" [[ -z "$FIXUP_STRUCT_SIZE" ]] && die "can't find special struct fixup_entry size" [[ -z "$BUG_STRUCT_SIZE" ]] && die "can't find special struct bug_entry size" [[ -z "$EX_STRUCT_SIZE" ]] && die "can't find special struct exception_table_entry size" return } find_special_section_data() { if [[ "$ARCH" = "ppc64le" ]]; then find_special_section_data_ppc64le return fi [[ "$CONFIG_PARAVIRT" -eq 0 ]] && AWK_OPTIONS="-vskip_p=1" SPECIAL_VARS="$(readelf -wi "$VMLINUX" | gawk --non-decimal-data $AWK_OPTIONS ' BEGIN { a = b = p = e = 0 } # Set state if name matches a == 0 && /DW_AT_name.* alt_instr[[:space:]]*$/ {a = 1; next} b == 0 && /DW_AT_name.* bug_entry[[:space:]]*$/ {b = 1; next} p == 0 && /DW_AT_name.* paravirt_patch_site[[:space:]]*$/ {p = 1; next} e == 0 && /DW_AT_name.* exception_table_entry[[:space:]]*$/ {e = 1; next} # Reset state unless this abbrev describes the struct size a == 1 && !/DW_AT_byte_size/ { a = 0; next } b == 1 && !/DW_AT_byte_size/ { b = 0; next } p == 1 && !/DW_AT_byte_size/ { p = 0; next } e == 1 && !/DW_AT_byte_size/ { e = 0; next } # Now that we know the size, stop parsing for it a == 1 {printf("export ALT_STRUCT_SIZE=%d\n", $4); a = 2} b == 1 {printf("export BUG_STRUCT_SIZE=%d\n", $4); b = 2} p == 1 {printf("export PARA_STRUCT_SIZE=%d\n", $4); p = 2} e == 1 {printf("export EX_STRUCT_SIZE=%d\n", $4); e = 2} # Bail out once we have everything a == 2 && b == 2 && (p == 2 || skip_p) && e == 2 {exit}')" [[ -n "$SPECIAL_VARS" ]] && eval "$SPECIAL_VARS" [[ -z "$ALT_STRUCT_SIZE" ]] && die "can't find special struct alt_instr size" [[ -z "$BUG_STRUCT_SIZE" ]] && die "can't find special struct bug_entry size" [[ -z "$EX_STRUCT_SIZE" ]] && die "can't find special struct exception_table_entry size" [[ -z "$PARA_STRUCT_SIZE" && "$CONFIG_PARAVIRT" -ne 0 ]] && die "can't find special struct paravirt_patch_site size" return } find_parent_obj() { dir="$(dirname "$1")" absdir="$(readlink -f "$dir")" pwddir="$(readlink -f .)" pdir="${absdir#$pwddir/}"
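	# Editor's note, not in the original script: the parent lookup below works
	# by grepping kbuild's hidden ".<object>.cmd" files, which record the
	# command line used to produce each object.  For example (hypothetical
	# paths), if fs/ext4/.built-in.o.cmd mentions fs/ext4/inode.o, then
	# fs/ext4/built-in.o is taken as inode.o's parent, and find_kobj() keeps
	# walking up until it reaches a .ko module or vmlinux.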
file="$(basename "$1")" grepname="${1%.o}" grepname="$grepname\.o" if [[ "$DEEP_FIND" -eq 1 ]]; then num=0 if [[ -n "$last_deep_find" ]]; then parent="$(grep -l "$grepname" "$last_deep_find"/.*.cmd | grep -Fv "$pdir/.${file}.cmd" | head -n1)" num="$(grep -l "$grepname" "$last_deep_find"/.*.cmd | grep -Fvc "$pdir/.${file}.cmd")" fi if [[ "$num" -eq 0 ]]; then parent="$(find ./* -name ".*.cmd" -print0 | xargs -0 grep -l "$grepname" | grep -Fv "$pdir/.${file}.cmd" | cut -c3- | head -n1)" num="$(find ./* -name ".*.cmd" -print0 | xargs -0 grep -l "$grepname" | grep -Fvc "$pdir/.${file}.cmd")" [[ "$num" -eq 1 ]] && last_deep_find="$(dirname "$parent")" fi else parent="$(grep -l "$grepname" "$dir"/.*.cmd | grep -Fv "$dir/.${file}.cmd" | head -n1)" num="$(grep -l "$grepname" "$dir"/.*.cmd | grep -Fvc "$dir/.${file}.cmd")" fi [[ "$num" -eq 0 ]] && PARENT="" && return [[ "$num" -gt 1 ]] && ERROR_IF_DIFF="two parent matches for $1" dir="$(dirname "$parent")" PARENT="$(basename "$parent")" PARENT="${PARENT#.}" PARENT="${PARENT%.cmd}" PARENT="$dir/$PARENT" [[ ! -e "$PARENT" ]] && die "ERROR: can't find parent $PARENT for $1" } find_kobj() { arg="$1" KOBJFILE="$arg" DEEP_FIND=0 ERROR_IF_DIFF= while true; do find_parent_obj "$KOBJFILE" [[ -n "$PARENT" ]] && DEEP_FIND=0 if [[ -z "$PARENT" ]]; then [[ "$KOBJFILE" = *.ko ]] && return case "$KOBJFILE" in */built-in.o|\ arch/x86/lib/lib.a|\ arch/x86/kernel/head*.o|\ arch/x86/kernel/ebda.o|\ arch/x86/kernel/platform-quirks.o|\ lib/lib.a) KOBJFILE=vmlinux return esac if [[ "$DEEP_FIND" -eq 0 ]]; then DEEP_FIND=1 continue; fi die "invalid ancestor $KOBJFILE for $arg" fi KOBJFILE="$PARENT" done } # Only allow alphanumerics and '_' and '-' in the module name. Everything else # is replaced with '-'. Also truncate to 48 chars so the full name fits in the # kernel's 56-byte module name array. module_name_string() { echo "${1//[^a-zA-Z0-9_-]/-}" | cut -c 1-48 } usage() { echo "usage: $(basename "$0") [options] " >&2 echo " patchN Input patchfile(s)" >&2 echo " -h, --help Show this help message" >&2 echo " -a, --archversion Specify the kernel arch version" >&2 echo " -r, --sourcerpm Specify kernel source RPM" >&2 echo " -s, --sourcedir Specify kernel source directory" >&2 echo " -c, --config Specify kernel config file" >&2 echo " -v, --vmlinux Specify original vmlinux" >&2 echo " -j, --jobs Specify the number of make jobs" >&2 echo " -t, --target Specify custom kernel build targets" >&2 echo " -n, --name Specify the name of the kpatch module" >&2 echo " -o, --output Specify output folder" >&2 echo " -d, --debug Enable 'xtrace' and keep scratch files" >&2 echo " in /tmp" >&2 echo " (can be specified multiple times)" >&2 echo " --skip-cleanup Skip post-build cleanup" >&2 echo " --skip-gcc-check Skip gcc version matching check" >&2 echo " (not recommended)" >&2 } options="$(getopt -o ha:r:s:c:v:j:t:n:o:d -l "help,archversion:,sourcerpm:,sourcedir:,config:,vmlinux:,jobs:,target:,name:,output:,debug,skip-gcc-check,skip-cleanup" -- "$@")" || die "getopt failed" eval set -- "$options" while [[ $# -gt 0 ]]; do case "$1" in -h|--help) usage exit 0 ;; -a|--archversion) ARCHVERSION="$2" shift ;; -r|--sourcerpm) [[ ! -f "$2" ]] && die "source rpm '$2' not found" SRCRPM="$(readlink -f "$2")" shift ;; -s|--sourcedir) [[ ! -d "$2" ]] && die "source dir '$2' not found" USERSRCDIR="$(readlink -f "$2")" shift ;; -c|--config) [[ ! -f "$2" ]] && die "config file '$2' not found" CONFIGFILE="$(readlink -f "$2")" shift ;; -v|--vmlinux) [[ ! 
-f "$2" ]] && die "vmlinux file '$2' not found" VMLINUX="$(readlink -f "$2")" shift ;; -j|--jobs) [[ ! "$2" -gt 0 ]] && die "Invalid number of make jobs '$2'" CPUS="$2" shift ;; -t|--target) TARGETS="$TARGETS $2" shift ;; -n|--name) MODNAME="$(module_name_string "$2")" shift ;; -o|--output) [[ ! -d "$2" ]] && die "output dir '$2' not found" BASE="$(readlink -f "$2")" shift ;; -d|--debug) DEBUG=$((DEBUG + 1)) if [[ $DEBUG -eq 1 ]]; then echo "DEBUG mode enabled" fi ;; --skip-cleanup) echo "Skipping cleanup" SKIPCLEANUP=1 ;; --skip-gcc-check) echo "WARNING: Skipping gcc version matching check (not recommended)" SKIPGCCCHECK=1 ;; *) [[ "$1" = "--" ]] && shift && continue [[ ! -f "$1" ]] && die "patch file '$1' not found" PATCH_LIST+=("$(readlink -f "$1")") ;; esac shift done if [[ ${#PATCH_LIST[@]} -eq 0 ]]; then warn "no patch file(s) specified" usage exit 1 fi if [[ $DEBUG -eq 1 ]] || [[ $DEBUG -ge 3 ]]; then set -o xtrace fi if [[ -n "$ARCHVERSION" ]] && [[ -n "$VMLINUX" ]]; then warn "--archversion is incompatible with --vmlinux" exit 1 fi if [[ -n "$SRCRPM" ]]; then if [[ -n "$ARCHVERSION" ]]; then warn "--archversion is incompatible with --sourcerpm" exit 1 fi rpmname="$(basename "$SRCRPM")" ARCHVERSION="${rpmname%.src.rpm}.$(uname -m)" ARCHVERSION="${ARCHVERSION#kernel-}" fi # ensure cachedir and tempdir are setup properly and cleaned mkdir -p "$TEMPDIR" || die "Couldn't create $TEMPDIR" rm -rf "${TEMPDIR:?}"/* rm -f "$LOGFILE" if [[ -n "$USERSRCDIR" ]]; then if [[ -n "$ARCHVERSION" ]]; then warn "--archversion is incompatible with --sourcedir" exit 1 fi SRCDIR="$USERSRCDIR" [[ -z "$VMLINUX" ]] && VMLINUX="$SRCDIR"/vmlinux [[ ! -e "$VMLINUX" ]] && die "can't find vmlinux" # Extract the target kernel version from vmlinux in this case. ARCHVERSION="$(strings "$VMLINUX" | grep -m 1 -e "^Linux version" | awk '{ print($3); }')" fi [[ -z "$ARCHVERSION" ]] && ARCHVERSION="$(uname -r)" [[ "$SKIPCLEANUP" -eq 0 ]] && trap cleanup EXIT INT TERM HUP KVER="${ARCHVERSION%%-*}" if [[ "$ARCHVERSION" =~ - ]]; then KREL="${ARCHVERSION##*-}" KREL="${KREL%.*}" fi [[ -z "$TARGETS" ]] && TARGETS="vmlinux modules" # Don't check external file. 
# shellcheck disable=SC1091 source /etc/os-release DISTRO="$ID" if [[ "$DISTRO" = fedora ]] || [[ "$DISTRO" = rhel ]] || [[ "$DISTRO" = ol ]] || [[ "$DISTRO" = centos ]]; then [[ -z "$VMLINUX" ]] && VMLINUX="/usr/lib/debug/lib/modules/$ARCHVERSION/vmlinux" [[ -e "$VMLINUX" ]] || die "kernel-debuginfo-$ARCHVERSION not installed" export PATH="/usr/lib64/ccache:$PATH" elif [[ "$DISTRO" = ubuntu ]] || [[ "$DISTRO" = debian ]]; then [[ -z "$VMLINUX" ]] && VMLINUX="/usr/lib/debug/boot/vmlinux-$ARCHVERSION" if [[ "$DISTRO" = ubuntu ]]; then [[ -e "$VMLINUX" ]] || die "linux-image-$ARCHVERSION-dbgsym not installed" elif [[ "$DISTRO" = debian ]]; then [[ -e "$VMLINUX" ]] || die "linux-image-$ARCHVERSION-dbg not installed" fi export PATH="/usr/lib/ccache:$PATH" fi find_dirs || die "can't find supporting tools" if [[ "$SKIPGCCCHECK" -eq 0 ]]; then gcc_version_check || die fi if [[ -n "$USERSRCDIR" ]]; then echo "Using source directory at $USERSRCDIR" # save vmlinux before it gets removed with mrproper [[ "$VMLINUX" -ef "$SRCDIR"/vmlinux ]] && cp -f "$VMLINUX" "$TEMPDIR/vmlinux" && VMLINUX="$TEMPDIR/vmlinux" elif [[ -e "$SRCDIR"/.config ]] && [[ -e "$VERSIONFILE" ]] && [[ "$(cat "$VERSIONFILE")" = "$ARCHVERSION" ]]; then echo "Using cache at $SRCDIR" else if [[ "$DISTRO" = fedora ]] || [[ "$DISTRO" = rhel ]] || [[ "$DISTRO" = ol ]] || [[ "$DISTRO" = centos ]]; then echo "Fedora/Red Hat distribution detected" rpm -q --quiet rpmdevtools || die "rpmdevtools not installed" clean_cache echo "Downloading kernel source for $ARCHVERSION" if [[ -z "$SRCRPM" ]]; then if [[ "$DISTRO" = fedora ]]; then wget -P "$TEMPDIR" "http://kojipkgs.fedoraproject.org/packages/kernel/$KVER/$KREL/src/kernel-$KVER-$KREL.src.rpm" 2>&1 | logger || die else rpm -q --quiet yum-utils || die "yum-utils not installed" yumdownloader --source --destdir "$TEMPDIR" "kernel-$ARCHVERSION" 2>&1 | logger || die fi SRCRPM="$TEMPDIR/kernel-$KVER-$KREL.src.rpm" fi echo "Unpacking kernel source" rpm -D "_topdir $RPMTOPDIR" -ivh "$SRCRPM" 2>&1 | logger || die rpmbuild -D "_topdir $RPMTOPDIR" -bp "--target=$(uname -m)" "$RPMTOPDIR"/SPECS/kernel.spec 2>&1 | logger || die "rpmbuild -bp failed. you may need to run 'yum-builddep kernel' first." 
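		# Editor's note, not in the original script: "rpmbuild -bp" runs only the
		# spec file's %prep stage, which unpacks and patches the kernel source
		# under $RPMTOPDIR/BUILD/kernel-*/linux-*; the next step moves that
		# prepared tree into the cache directory so later builds can reuse it.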
mv "$RPMTOPDIR"/BUILD/kernel-*/linux-"${ARCHVERSION%.*}"*"${ARCHVERSION##*.}" "$SRCDIR" 2>&1 | logger || die rm -rf "$RPMTOPDIR" rm -rf "$SRCDIR/.git" if [[ "$ARCHVERSION" == *-* ]]; then echo "-${ARCHVERSION##*-}" > "$SRCDIR/localversion" || die fi echo "$ARCHVERSION" > "$VERSIONFILE" || die elif [[ "$DISTRO" = ubuntu ]] || [[ "$DISTRO" = debian ]]; then echo "Debian/Ubuntu distribution detected" if [[ "$DISTRO" = ubuntu ]]; then # url may be changed for a different mirror url="http://archive.ubuntu.com/ubuntu/pool/main/l" sublevel="SUBLEVEL = 0" UBUNTU_KERNEL=1 elif [[ "$DISTRO" = debian ]]; then # url may be changed for a different mirror url="http://ftp.debian.org/debian/pool/main/l" sublevel="SUBLEVEL =" fi pkgname="$(dpkg-query -W -f='${Source}' "linux-image-$ARCHVERSION")" pkgver="$(dpkg-query -W -f='${Version}' "linux-image-$ARCHVERSION")" dscname="${pkgname}_${pkgver}.dsc" clean_cache cd "$TEMPDIR" || die echo "Downloading and unpacking the kernel source for $ARCHVERSION" # Download source deb pkg (dget -u "$url/${pkgname}/${dscname}" 2>&1) | logger || die "dget: Could not fetch/unpack $url/${pkgname}/${dscname}" mv "${pkgname}-$KVER" "$SRCDIR" || die cp "/boot/config-${ARCHVERSION}" "$SRCDIR/.config" || die if [[ "$ARCHVERSION" == *-* ]]; then echo "-${ARCHVERSION#*-}" > "$SRCDIR/localversion" || die fi # for some reason the Ubuntu kernel versions don't follow the # upstream SUBLEVEL; they are always at SUBLEVEL 0 sed -i "s/^SUBLEVEL.*/${sublevel}/" "$SRCDIR/Makefile" || die echo "$ARCHVERSION" > "$VERSIONFILE" || die else die "Unsupported distribution" fi fi # save .config before it gets removed with mrproper [[ -z "$CONFIGFILE" ]] && CONFIGFILE="$SRCDIR"/.config [[ ! -e "$CONFIGFILE" ]] && die "can't find config file" [[ "$CONFIGFILE" -ef "$SRCDIR"/.config ]] && cp -f "$CONFIGFILE" "$TEMPDIR" && CONFIGFILE="$TEMPDIR"/.config # Build variables - Set some defaults, then adjust features # according to .config and kernel version KBUILD_EXTRA_SYMBOLS="" KPATCH_LDFLAGS="" KPATCH_MODULE=true # kernel option checking grep -q "CONFIG_DEBUG_INFO=y" "$CONFIGFILE" || die "kernel doesn't have 'CONFIG_DEBUG_INFO' enabled" if grep -q "CONFIG_LIVEPATCH=y" "$CONFIGFILE"; then # The kernel supports livepatch. if version_gte "${ARCHVERSION//-*/}" 4.7.0 || is_rhel "$ARCHVERSION"; then # Use new .klp.rela. sections KPATCH_MODULE=false if version_gte "${ARCHVERSION//-*/}" 4.9.0 || is_rhel "$ARCHVERSION"; then KPATCH_LDFLAGS="--unique=.parainstructions --unique=.altinstructions" fi fi else # No support for livepatch in the kernel. Kpatch core module is needed. 
find_core_symvers || die "unable to find Module.symvers for kpatch core module" KBUILD_EXTRA_SYMBOLS="$SYMVERSFILE" fi # optional kernel configs: CONFIG_PARAVIRT if grep -q "CONFIG_PARAVIRT=y" "$CONFIGFILE"; then CONFIG_PARAVIRT=1 else CONFIG_PARAVIRT=0 fi # unsupported kernel option checking: CONFIG_DEBUG_INFO_SPLIT grep -q "CONFIG_DEBUG_INFO_SPLIT=y" "$CONFIGFILE" && die "kernel option 'CONFIG_DEBUG_INFO_SPLIT' not supported" echo "Testing patch file(s)" cd "$SRCDIR" || die apply_patches remove_patches cp -LR "$DATADIR/patch" "$TEMPDIR" || die if [[ "$ARCH" = "ppc64le" ]]; then ARCH_KCFLAGS="-mcmodel=large -fplugin=$PLUGINDIR/ppc64le-plugin.so" fi export KCFLAGS="-I$DATADIR/patch -ffunction-sections -fdata-sections $ARCH_KCFLAGS" echo "Reading special section data" find_special_section_data if [[ $DEBUG -ge 4 ]]; then export KPATCH_GCC_DEBUG=1 fi echo "Building original kernel" ./scripts/setlocalversion --save-scmversion || die make mrproper 2>&1 | logger || die cp -f "$CONFIGFILE" "$SRCDIR/.config" unset KPATCH_GCC_TEMPDIR # $TARGETS used as list, no quotes. # shellcheck disable=SC2086 CROSS_COMPILE="$TOOLSDIR/kpatch-gcc " make "-j$CPUS" $TARGETS 2>&1 | logger || die echo "Building patched kernel" apply_patches mkdir -p "$TEMPDIR/orig" "$TEMPDIR/patched" KPATCH_GCC_TEMPDIR="$TEMPDIR" export KPATCH_GCC_TEMPDIR # $TARGETS used as list, no quotes. # shellcheck disable=SC2086 CROSS_COMPILE="$TOOLSDIR/kpatch-gcc " \ KBUILD_MODPOST_WARN=1 \ make "-j$CPUS" $TARGETS 2>&1 | logger || die grep "undefined reference" "$LOGFILE" | grep -qv kpatch_shadow && die grep "undefined!" "$LOGFILE" | grep -qv kpatch_shadow && die if [[ ! -e "$TEMPDIR/changed_objs" ]]; then die "no changed objects found" fi # Read as words, no quotes. # shellcheck disable=SC2013 for i in $(cat "$TEMPDIR/changed_objs") do mkdir -p "$TEMPDIR/patched/$(dirname "$i")" || die cp -f "$SRCDIR/$i" "$TEMPDIR/patched/$i" || die done echo "Extracting new and modified ELF sections" # If no kpatch module name was provided on the command line: # - For single input .patch, use the patch filename # - For multiple input .patches, use "patch" # - Prefix with "kpatch" or "livepatch" accordingly if [[ -z "$MODNAME" ]] ; then if [[ "${#PATCH_LIST[@]}" -eq 1 ]]; then MODNAME="$(basename "${PATCH_LIST[0]}")" if [[ "$MODNAME" =~ \.patch$ ]] || [[ "$MODNAME" =~ \.diff$ ]]; then MODNAME="${MODNAME%.*}" fi else MODNAME="patch" fi if "$KPATCH_MODULE"; then MODNAME="kpatch-$MODNAME" else MODNAME="livepatch-$MODNAME" fi MODNAME="$(module_name_string "$MODNAME")" fi FILES="$(cat "$TEMPDIR/changed_objs")" cd "$TEMPDIR" || die mkdir output declare -a objnames CHANGED=0 ERROR=0 for i in $FILES; do # In RHEL 7 based kernels, copy_user_64.o misuses the .fixup section, # which confuses create-diff-object. It's fine to skip it, it's an # assembly file anyway. 
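	# (added note) For each changed object that is not skipped below, the loop
	# needs three inputs: the original .o saved by kpatch-gcc, the patched .o
	# copied above, and the final linked object (vmlinux or a module .ko) that
	# find_kobj resolves. create-diff-object uses that final object, together
	# with Module.symvers, to work out how every symbol reference in the
	# extracted sections should be resolved at patch load time.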
[[ "$DISTRO" = rhel ]] || [[ "$DISTRO" = centos ]] || [[ "$DISTRO" = ol ]] && \ [[ "$i" = arch/x86/lib/copy_user_64.o ]] && continue [[ "$i" = usr/initramfs_data.o ]] && continue mkdir -p "output/$(dirname "$i")" cd "$SRCDIR" || die find_kobj "$i" if [[ "$KOBJFILE" = vmlinux ]]; then KOBJFILE="$VMLINUX" else KOBJFILE="$TEMPDIR/module/$KOBJFILE" fi cd "$TEMPDIR" || die if [[ -e "orig/$i" ]]; then # create-diff-object orig.o patched.o kernel-object output.o Module.symvers patch-mod-name "$TOOLSDIR"/create-diff-object "orig/$i" "patched/$i" "$KOBJFILE" \ "output/$i" "$SRCDIR/Module.symvers" "${MODNAME//-/_}" 2>&1 | logger 1 check_pipe_status create-diff-object # create-diff-object returns 3 if no functional change is found [[ "$rc" -eq 0 ]] || [[ "$rc" -eq 3 ]] || ERROR="$((ERROR + 1))" if [[ "$rc" -eq 0 ]]; then [[ -n "$ERROR_IF_DIFF" ]] && die "$ERROR_IF_DIFF" CHANGED=1 objnames[${#objnames[@]}]="$KOBJFILE" fi else cp -f "patched/$i" "output/$i" objnames[${#objnames[@]}]="$KOBJFILE" fi done if [[ "$ERROR" -ne 0 ]]; then die "$ERROR error(s) encountered" fi if [[ "$CHANGED" -eq 0 ]]; then die "no functional changes found" fi echo -n "Patched objects:" for i in $(echo "${objnames[@]}" | tr ' ' '\n' | sort -u | tr '\n' ' ') do echo -n " $(basename "$i")" done echo export KCFLAGS="-I$DATADIR/patch $ARCH_KCFLAGS" if "$KPATCH_MODULE"; then export KCPPFLAGS="-D__KPATCH_MODULE__" fi echo "Building patch module: $MODNAME.ko" if [[ ! -z "$UBUNTU_KERNEL" ]]; then # UBUNTU: add UTS_UBUNTU_RELEASE_ABI to utsrelease.h after regenerating it UBUNTU_ABI="${ARCHVERSION#*-}" UBUNTU_ABI="${UBUNTU_ABI%-*}" echo "#define UTS_UBUNTU_RELEASE_ABI $UBUNTU_ABI" >> "$SRCDIR"/include/generated/utsrelease.h fi cd "$TEMPDIR/output" || die # $KPATCH_LDFLAGS and result of find used as list, no quotes. # shellcheck disable=SC2086,SC2046 ld -r $KPATCH_LDFLAGS -o ../patch/tmp_output.o $(find . -name "*.o") 2>&1 | logger || die if "$KPATCH_MODULE"; then # Add .kpatch.checksum for kpatch script md5sum ../patch/tmp_output.o | awk '{printf "%s\0", $1}' > checksum.tmp || die objcopy --add-section .kpatch.checksum=checksum.tmp --set-section-flags .kpatch.checksum=alloc,load,contents,readonly ../patch/tmp_output.o || die rm -f checksum.tmp "$TOOLSDIR"/create-kpatch-module "$TEMPDIR"/patch/tmp_output.o "$TEMPDIR"/patch/output.o 2>&1 | logger 1 check_pipe_status create-kpatch-module else cp "$TEMPDIR"/patch/tmp_output.o "$TEMPDIR"/patch/output.o || die fi cd "$TEMPDIR/patch" || die KPATCH_BUILD="$SRCDIR" KPATCH_NAME="$MODNAME" \ KBUILD_EXTRA_SYMBOLS="$KBUILD_EXTRA_SYMBOLS" \ KPATCH_LDFLAGS="$KPATCH_LDFLAGS" \ make 2>&1 | logger || die if ! "$KPATCH_MODULE"; then if [[ -z "$KPATCH_LDFLAGS" ]]; then extra_flags="--no-klp-arch-sections" fi cp "$TEMPDIR/patch/$MODNAME.ko" "$TEMPDIR/patch/tmp.ko" || die "$TOOLSDIR"/create-klp-module $extra_flags "$TEMPDIR/patch/tmp.ko" "$TEMPDIR/patch/$MODNAME.ko" 2>&1 | logger 1 check_pipe_status create-klp-module fi cp -f "$TEMPDIR/patch/$MODNAME.ko" "$BASE" || die [[ "$DEBUG" -eq 0 ]] && rm -f "$LOGFILE" echo "SUCCESS" kpatch-0.5.0/kpatch-build/kpatch-elf.c000066400000000000000000000474711321664017000175340ustar00rootroot00000000000000/* * kpatch-elf.c * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. */ /* * This file provides a common api to create, inspect, and manipulate * kpatch_elf objects. */ #include #include #include #include #include #include #include #include #include "kpatch-elf.h" /******************* * Helper functions ******************/ char *status_str(enum status status) { switch(status) { case NEW: return "NEW"; case CHANGED: return "CHANGED"; case SAME: return "SAME"; default: ERROR("status_str"); } /* never reached */ return NULL; } int is_rela_section(struct section *sec) { return (sec->sh.sh_type == SHT_RELA); } int is_text_section(struct section *sec) { return (sec->sh.sh_type == SHT_PROGBITS && (sec->sh.sh_flags & SHF_EXECINSTR)); } int is_debug_section(struct section *sec) { char *name; if (is_rela_section(sec)) name = sec->base->name; else name = sec->name; return !strncmp(name, ".debug_", 7) || !strncmp(name, ".eh_frame", 9); } struct section *find_section_by_index(struct list_head *list, unsigned int index) { struct section *sec; list_for_each_entry(sec, list, list) if (sec->index == index) return sec; return NULL; } struct section *find_section_by_name(struct list_head *list, const char *name) { struct section *sec; list_for_each_entry(sec, list, list) if (!strcmp(sec->name, name)) return sec; return NULL; } struct symbol *find_symbol_by_index(struct list_head *list, size_t index) { struct symbol *sym; list_for_each_entry(sym, list, list) if (sym->index == index) return sym; return NULL; } struct symbol *find_symbol_by_name(struct list_head *list, const char *name) { struct symbol *sym; list_for_each_entry(sym, list, list) if (sym->name && !strcmp(sym->name, name)) return sym; return NULL; } struct rela *find_rela_by_offset(struct section *relasec, unsigned int offset) { struct rela *rela; list_for_each_entry(rela, &relasec->relas, list) { if (rela->offset == offset) return rela; } return NULL; } /* returns the offset of the string in the string table */ int offset_of_string(struct list_head *list, char *name) { struct string *string; int index = 0; /* try to find string in the string list */ list_for_each_entry(string, list, list) { if (!strcmp(string->name, name)) return index; index += strlen(string->name) + 1; } /* allocate a new string */ ALLOC_LINK(string, list); string->name = name; return index; } void kpatch_create_rela_list(struct kpatch_elf *kelf, struct section *sec) { int rela_nr, index = 0, skip = 0; struct rela *rela; unsigned int symndx; /* find matching base (text/data) section */ sec->base = find_section_by_index(&kelf->sections, sec->sh.sh_info); if (!sec->base) ERROR("can't find base section for rela section %s", sec->name); /* create reverse link from base section to this rela section */ sec->base->rela = sec; rela_nr = sec->sh.sh_size / sec->sh.sh_entsize; log_debug("\n=== rela list for %s (%d entries) ===\n", sec->base->name, rela_nr); if (is_debug_section(sec)) { log_debug("skipping rela listing for .debug_* section\n"); skip = 1; } /* read and store the rela entries */ while (rela_nr--) { ALLOC_LINK(rela, &sec->relas); if (!gelf_getrela(sec->data, index, &rela->rela)) 
ERROR("gelf_getrela"); index++; rela->type = GELF_R_TYPE(rela->rela.r_info); rela->addend = rela->rela.r_addend; rela->offset = rela->rela.r_offset; symndx = GELF_R_SYM(rela->rela.r_info); rela->sym = find_symbol_by_index(&kelf->symbols, symndx); if (!rela->sym) ERROR("could not find rela entry symbol\n"); if (rela->sym->sec && (rela->sym->sec->sh.sh_flags & SHF_STRINGS)) { rela->string = rela->sym->sec->data->d_buf + rela->addend; if (!rela->string) ERROR("could not lookup rela string for %s+%d", rela->sym->name, rela->addend); } if (skip) continue; log_debug("offset %d, type %d, %s %s %d", rela->offset, rela->type, rela->sym->name, (rela->addend < 0)?"-":"+", abs(rela->addend)); if (rela->string) log_debug(" (string = %s)", rela->string); log_debug("\n"); } } void kpatch_create_section_list(struct kpatch_elf *kelf) { Elf_Scn *scn = NULL; struct section *sec; size_t shstrndx, sections_nr; if (elf_getshdrnum(kelf->elf, §ions_nr)) ERROR("elf_getshdrnum"); /* * elf_getshdrnum() includes section index 0 but elf_nextscn * doesn't return that section so subtract one. */ sections_nr--; if (elf_getshdrstrndx(kelf->elf, &shstrndx)) ERROR("elf_getshdrstrndx"); log_debug("=== section list (%zu) ===\n", sections_nr); while (sections_nr--) { ALLOC_LINK(sec, &kelf->sections); scn = elf_nextscn(kelf->elf, scn); if (!scn) ERROR("scn NULL"); if (!gelf_getshdr(scn, &sec->sh)) ERROR("gelf_getshdr"); sec->name = elf_strptr(kelf->elf, shstrndx, sec->sh.sh_name); if (!sec->name) ERROR("elf_strptr"); sec->data = elf_getdata(scn, NULL); if (!sec->data) ERROR("elf_getdata"); sec->index = elf_ndxscn(scn); log_debug("ndx %02d, data %p, size %zu, name %s\n", sec->index, sec->data->d_buf, sec->data->d_size, sec->name); } /* Sanity check, one more call to elf_nextscn() should return NULL */ if (elf_nextscn(kelf->elf, scn)) ERROR("expected NULL"); } void kpatch_create_symbol_list(struct kpatch_elf *kelf) { struct section *symtab; struct symbol *sym; int symbols_nr, index = 0; symtab = find_section_by_name(&kelf->sections, ".symtab"); if (!symtab) ERROR("missing symbol table"); symbols_nr = symtab->sh.sh_size / symtab->sh.sh_entsize; log_debug("\n=== symbol list (%d entries) ===\n", symbols_nr); while (symbols_nr--) { ALLOC_LINK(sym, &kelf->symbols); sym->index = index; if (!gelf_getsym(symtab->data, index, &sym->sym)) ERROR("gelf_getsym"); index++; sym->name = elf_strptr(kelf->elf, symtab->sh.sh_link, sym->sym.st_name); if (!sym->name) ERROR("elf_strptr"); sym->type = GELF_ST_TYPE(sym->sym.st_info); sym->bind = GELF_ST_BIND(sym->sym.st_info); if (sym->sym.st_shndx > SHN_UNDEF && sym->sym.st_shndx < SHN_LORESERVE) { sym->sec = find_section_by_index(&kelf->sections, sym->sym.st_shndx); if (!sym->sec) ERROR("couldn't find section for symbol %s\n", sym->name); if (sym->type == STT_SECTION) { sym->sec->secsym = sym; /* use the section name as the symbol name */ sym->name = sym->sec->name; } } log_debug("sym %02d, type %d, bind %d, ndx %02d, name %s", sym->index, sym->type, sym->bind, sym->sym.st_shndx, sym->name); if (sym->sec) log_debug(" -> %s", sym->sec->name); log_debug("\n"); } } /* Check which functions have fentry/mcount calls; save this info for later use. 
*/ static void kpatch_find_func_profiling_calls(struct kpatch_elf *kelf) { struct symbol *sym; struct rela *rela; list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type != STT_FUNC || !sym->sec || !sym->sec->rela) continue; #ifdef __powerpc__ list_for_each_entry(rela, &sym->sec->rela->relas, list) { if (!strcmp(rela->sym->name, "_mcount")) { sym->has_func_profiling = 1; break; } } #else rela = list_first_entry(&sym->sec->rela->relas, struct rela, list); if (rela->type != R_X86_64_NONE || strcmp(rela->sym->name, "__fentry__")) continue; sym->has_func_profiling = 1; #endif } } struct kpatch_elf *kpatch_elf_open(const char *name) { Elf *elf; int fd; struct kpatch_elf *kelf; struct section *sec; fd = open(name, O_RDONLY); if (fd == -1) ERROR("open"); elf = elf_begin(fd, ELF_C_READ_MMAP, NULL); if (!elf) ERROR("elf_begin"); kelf = malloc(sizeof(*kelf)); if (!kelf) ERROR("malloc"); memset(kelf, 0, sizeof(*kelf)); INIT_LIST_HEAD(&kelf->sections); INIT_LIST_HEAD(&kelf->symbols); INIT_LIST_HEAD(&kelf->strings); /* read and store section, symbol entries from file */ kelf->elf = elf; kelf->fd = fd; kpatch_create_section_list(kelf); kpatch_create_symbol_list(kelf); /* for each rela section, read and store the rela entries */ list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec)) continue; INIT_LIST_HEAD(&sec->relas); kpatch_create_rela_list(kelf, sec); } kpatch_find_func_profiling_calls(kelf); return kelf; } void kpatch_dump_kelf(struct kpatch_elf *kelf) { struct section *sec; struct symbol *sym; struct rela *rela; if (loglevel > DEBUG) return; printf("\n=== Sections ===\n"); list_for_each_entry(sec, &kelf->sections, list) { printf("%02d %s (%s)", sec->index, sec->name, status_str(sec->status)); if (is_rela_section(sec)) { printf(", base-> %s\n", sec->base->name); /* skip .debug_* sections */ if (is_debug_section(sec)) goto next; printf("rela section expansion\n"); list_for_each_entry(rela, &sec->relas, list) { printf("sym %d, offset %d, type %d, %s %s %d\n", rela->sym->index, rela->offset, rela->type, rela->sym->name, (rela->addend < 0)?"-":"+", abs(rela->addend)); } } else { if (sec->sym) printf(", sym-> %s", sec->sym->name); if (sec->secsym) printf(", secsym-> %s", sec->secsym->name); if (sec->rela) printf(", rela-> %s", sec->rela->name); } next: printf("\n"); } printf("\n=== Symbols ===\n"); list_for_each_entry(sym, &kelf->symbols, list) { printf("sym %02d, type %d, bind %d, ndx %02d, name %s (%s)", sym->index, sym->type, sym->bind, sym->sym.st_shndx, sym->name, status_str(sym->status)); if (sym->sec && (sym->type == STT_FUNC || sym->type == STT_OBJECT)) printf(" -> %s", sym->sec->name); printf("\n"); } } int is_null_sym(struct symbol *sym) { return !strlen(sym->name); } int is_file_sym(struct symbol *sym) { return sym->type == STT_FILE; } int is_local_func_sym(struct symbol *sym) { return sym->bind == STB_LOCAL && sym->type == STT_FUNC; } int is_local_sym(struct symbol *sym) { return sym->bind == STB_LOCAL; } void print_strtab(char *buf, size_t size) { int i; for (i = 0; i < size; i++) { if (buf[i] == 0) printf("\\0"); else printf("%c",buf[i]); } } void kpatch_create_shstrtab(struct kpatch_elf *kelf) { struct section *shstrtab, *sec; size_t size, offset, len; char *buf; shstrtab = find_section_by_name(&kelf->sections, ".shstrtab"); if (!shstrtab) ERROR("find_section_by_name"); /* determine size of string table */ size = 1; /* for initial NULL terminator */ list_for_each_entry(sec, &kelf->sections, list) size += strlen(sec->name) + 1; /* include NULL terminator */ /* 
allocate data buffer */ buf = malloc(size); if (!buf) ERROR("malloc"); memset(buf, 0, size); /* populate string table and link with section header */ offset = 1; list_for_each_entry(sec, &kelf->sections, list) { len = strlen(sec->name) + 1; sec->sh.sh_name = offset; memcpy(buf + offset, sec->name, len); offset += len; } if (offset != size) ERROR("shstrtab size mismatch"); shstrtab->data->d_buf = buf; shstrtab->data->d_size = size; if (loglevel <= DEBUG) { printf("shstrtab: "); print_strtab(buf, size); printf("\n"); list_for_each_entry(sec, &kelf->sections, list) printf("%s @ shstrtab offset %d\n", sec->name, sec->sh.sh_name); } } void kpatch_create_strtab(struct kpatch_elf *kelf) { struct section *strtab; struct symbol *sym; size_t size = 0, offset = 0, len; char *buf; strtab = find_section_by_name(&kelf->sections, ".strtab"); if (!strtab) ERROR("find_section_by_name"); /* determine size of string table */ list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type == STT_SECTION) continue; size += strlen(sym->name) + 1; /* include NULL terminator */ } /* allocate data buffer */ buf = malloc(size); if (!buf) ERROR("malloc"); memset(buf, 0, size); /* populate string table and link with section header */ list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type == STT_SECTION) { sym->sym.st_name = 0; continue; } len = strlen(sym->name) + 1; sym->sym.st_name = offset; memcpy(buf + offset, sym->name, len); offset += len; } if (offset != size) ERROR("shstrtab size mismatch"); strtab->data->d_buf = buf; strtab->data->d_size = size; if (loglevel <= DEBUG) { printf("strtab: "); print_strtab(buf, size); printf("\n"); list_for_each_entry(sym, &kelf->symbols, list) printf("%s @ strtab offset %d\n", sym->name, sym->sym.st_name); } } void kpatch_create_symtab(struct kpatch_elf *kelf) { struct section *symtab; struct symbol *sym; char *buf; size_t size; int nr = 0, offset = 0, nr_local = 0; symtab = find_section_by_name(&kelf->sections, ".symtab"); if (!symtab) ERROR("find_section_by_name"); /* count symbols */ list_for_each_entry(sym, &kelf->symbols, list) nr++; /* create new symtab buffer */ size = nr * symtab->sh.sh_entsize; buf = malloc(size); if (!buf) ERROR("malloc"); memset(buf, 0, size); offset = 0; list_for_each_entry(sym, &kelf->symbols, list) { memcpy(buf + offset, &sym->sym, symtab->sh.sh_entsize); offset += symtab->sh.sh_entsize; if (is_local_sym(sym)) nr_local++; } symtab->data->d_buf = buf; symtab->data->d_size = size; /* update symtab section header */ symtab->sh.sh_link = find_section_by_name(&kelf->sections, ".strtab")->index; symtab->sh.sh_info = nr_local; } struct section *create_section_pair(struct kpatch_elf *kelf, char *name, int entsize, int nr) { char *relaname; struct section *sec, *relasec; int size = entsize * nr; relaname = malloc(strlen(name) + strlen(".rela") + 1); if (!relaname) ERROR("malloc"); strcpy(relaname, ".rela"); strcat(relaname, name); /* allocate text section resources */ ALLOC_LINK(sec, &kelf->sections); sec->name = name; /* set data */ sec->data = malloc(sizeof(*sec->data)); if (!sec->data) ERROR("malloc"); sec->data->d_buf = malloc(size); if (!sec->data->d_buf) ERROR("malloc"); memset(sec->data->d_buf, 0, size); sec->data->d_size = size; sec->data->d_type = ELF_T_BYTE; /* set section header */ sec->sh.sh_type = SHT_PROGBITS; sec->sh.sh_entsize = entsize; sec->sh.sh_addralign = 8; sec->sh.sh_flags = SHF_ALLOC; sec->sh.sh_size = size; /* allocate rela section resources */ ALLOC_LINK(relasec, &kelf->sections); relasec->name = relaname; relasec->base = 
sec; INIT_LIST_HEAD(&relasec->relas); /* set data, buffers generated by kpatch_rebuild_rela_section_data() */ relasec->data = malloc(sizeof(*relasec->data)); if (!relasec->data) ERROR("malloc"); /* set section header */ relasec->sh.sh_type = SHT_RELA; relasec->sh.sh_entsize = sizeof(GElf_Rela); relasec->sh.sh_addralign = 8; /* set text rela section pointer */ sec->rela = relasec; return sec; } void kpatch_remove_and_free_section(struct kpatch_elf *kelf, char *secname) { struct section *sec, *safesec; struct rela *rela, *saferela; list_for_each_entry_safe(sec, safesec, &kelf->sections, list) { if (strcmp(secname, sec->name)) continue; if (is_rela_section(sec)) { list_for_each_entry_safe(rela, saferela, &sec->relas, list) { list_del(&rela->list); memset(rela, 0, sizeof(*rela)); free(rela); } } /* * Remove the STT_SECTION symbol from the symtab, * otherwise when we remove the section we'll end up * with UNDEF section symbols in the symtab. */ if (!is_rela_section(sec) && sec->secsym) { list_del(&sec->secsym->list); memset(sec->secsym, 0, sizeof(*sec->secsym)); free(sec->secsym); } list_del(&sec->list); memset(sec, 0, sizeof(*sec)); free(sec); } } void kpatch_reindex_elements(struct kpatch_elf *kelf) { struct section *sec; struct symbol *sym; int index; index = 1; /* elf write function handles NULL section 0 */ list_for_each_entry(sec, &kelf->sections, list) sec->index = index++; index = 0; list_for_each_entry(sym, &kelf->symbols, list) { sym->index = index++; if (sym->sec) sym->sym.st_shndx = sym->sec->index; else if (sym->sym.st_shndx != SHN_ABS && sym->sym.st_shndx != SHN_LIVEPATCH) sym->sym.st_shndx = SHN_UNDEF; } } void kpatch_rebuild_rela_section_data(struct section *sec) { struct rela *rela; int nr = 0, index = 0, size; GElf_Rela *relas; list_for_each_entry(rela, &sec->relas, list) nr++; size = nr * sizeof(*relas); relas = malloc(size); if (!relas) ERROR("malloc"); sec->data->d_buf = relas; sec->data->d_size = size; /* d_type remains ELF_T_RELA */ sec->sh.sh_size = size; list_for_each_entry(rela, &sec->relas, list) { relas[index].r_offset = rela->offset; relas[index].r_addend = rela->addend; relas[index].r_info = GELF_R_INFO(rela->sym->index, rela->type); index++; } /* sanity check, index should equal nr */ if (index != nr) ERROR("size mismatch in rebuilt rela section"); } void kpatch_write_output_elf(struct kpatch_elf *kelf, Elf *elf, char *outfile) { int fd; struct section *sec; Elf *elfout; GElf_Ehdr eh, ehout; Elf_Scn *scn; Elf_Data *data; GElf_Shdr sh; /* TODO make this argv */ fd = creat(outfile, 0777); if (fd == -1) ERROR("creat"); elfout = elf_begin(fd, ELF_C_WRITE, NULL); if (!elfout) ERROR("elf_begin"); if (!gelf_newehdr(elfout, gelf_getclass(kelf->elf))) ERROR("gelf_newehdr"); if (!gelf_getehdr(elfout, &ehout)) ERROR("gelf_getehdr"); if (!gelf_getehdr(elf, &eh)) ERROR("gelf_getehdr"); memset(&ehout, 0, sizeof(ehout)); ehout.e_ident[EI_DATA] = eh.e_ident[EI_DATA]; ehout.e_machine = eh.e_machine; ehout.e_type = eh.e_type; ehout.e_version = EV_CURRENT; ehout.e_shstrndx = find_section_by_name(&kelf->sections, ".shstrtab")->index; /* add changed sections */ list_for_each_entry(sec, &kelf->sections, list) { scn = elf_newscn(elfout); if (!scn) ERROR("elf_newscn"); data = elf_newdata(scn); if (!data) ERROR("elf_newdata"); if (!elf_flagdata(data, ELF_C_SET, ELF_F_DIRTY)) ERROR("elf_flagdata"); data->d_type = sec->data->d_type; data->d_buf = sec->data->d_buf; data->d_size = sec->data->d_size; if(!gelf_getshdr(scn, &sh)) ERROR("gelf_getshdr"); sh = sec->sh; if (!gelf_update_shdr(scn, 
&sh)) ERROR("gelf_update_shdr"); } if (!gelf_update_ehdr(elfout, &ehout)) ERROR("gelf_update_ehdr"); if (elf_update(elfout, ELF_C_WRITE) < 0) { printf("%s\n",elf_errmsg(-1)); ERROR("elf_update"); } } /* * While this is a one-shot program without a lot of proper cleanup in case * of an error, this function serves a debugging purpose: to break down and * zero data structures we shouldn't be accessing anymore. This should * help cause an immediate and obvious issue when a logic error leads to * accessing data that is not intended to be accessed past a particular point. */ void kpatch_elf_teardown(struct kpatch_elf *kelf) { struct section *sec, *safesec; struct symbol *sym, *safesym; struct rela *rela, *saferela; list_for_each_entry_safe(sec, safesec, &kelf->sections, list) { if (is_rela_section(sec)) { list_for_each_entry_safe(rela, saferela, &sec->relas, list) { memset(rela, 0, sizeof(*rela)); free(rela); } memset(sec, 0, sizeof(*sec)); free(sec); } } list_for_each_entry_safe(sym, safesym, &kelf->symbols, list) { memset(sym, 0, sizeof(*sym)); free(sym); } INIT_LIST_HEAD(&kelf->sections); INIT_LIST_HEAD(&kelf->symbols); } void kpatch_elf_free(struct kpatch_elf *kelf) { elf_end(kelf->elf); close(kelf->fd); memset(kelf, 0, sizeof(*kelf)); free(kelf); } kpatch-0.5.0/kpatch-build/kpatch-elf.h000066400000000000000000000104341321664017000175260ustar00rootroot00000000000000/* * kpatch-elf.h * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. */ #ifndef _KPATCH_ELF_H_ #define _KPATCH_ELF_H_ #include #include "list.h" #include "log.h" #define KLP_SYM_PREFIX ".klp.sym." #define KLP_RELASEC_PREFIX ".klp.rela." #define KLP_ARCH_PREFIX ".klp.arch." 
#define SHF_RELA_LIVEPATCH 0x00100000 #define SHN_LIVEPATCH 0xff20 /******************* * Data structures * ****************/ struct section; struct symbol; struct rela; enum status { NEW, CHANGED, SAME }; struct section { struct list_head list; struct section *twin; GElf_Shdr sh; Elf_Data *data; char *name; int index; enum status status; int include; int ignore; int grouped; union { struct { /* if (is_rela_section()) */ struct section *base; struct list_head relas; }; struct { /* else */ struct section *rela; struct symbol *secsym, *sym; }; }; }; struct symbol { struct list_head list; struct symbol *twin; struct section *sec; GElf_Sym sym; char *name; int index; unsigned char bind, type; enum status status; union { int include; /* used in the patched elf */ int strip; /* used in the output elf */ }; int has_func_profiling; }; struct rela { struct list_head list; GElf_Rela rela; struct symbol *sym; unsigned int type; int addend; int offset; char *string; }; struct string { struct list_head list; char *name; }; struct kpatch_elf { Elf *elf; struct list_head sections; struct list_head symbols; struct list_head strings; int fd; }; /******************* * Helper functions ******************/ char *status_str(enum status status); int is_rela_section(struct section *sec); int is_text_section(struct section *sec); int is_debug_section(struct section *sec); struct section *find_section_by_index(struct list_head *list, unsigned int index); struct section *find_section_by_name(struct list_head *list, const char *name); struct symbol *find_symbol_by_index(struct list_head *list, size_t index); struct symbol *find_symbol_by_name(struct list_head *list, const char *name); struct rela *find_rela_by_offset(struct section *relasec, unsigned int offset); #define ALLOC_LINK(_new, _list) \ { \ (_new) = malloc(sizeof(*(_new))); \ if (!(_new)) \ ERROR("malloc"); \ memset((_new), 0, sizeof(*(_new))); \ INIT_LIST_HEAD(&(_new)->list); \ list_add_tail(&(_new)->list, (_list)); \ } int offset_of_string(struct list_head *list, char *name); #ifndef R_PPC64_ENTRY #define R_PPC64_ENTRY 118 #endif /************* * Functions * **********/ void kpatch_create_rela_list(struct kpatch_elf *kelf, struct section *sec); void kpatch_create_section_list(struct kpatch_elf *kelf); void kpatch_create_symbol_list(struct kpatch_elf *kelf); struct kpatch_elf *kpatch_elf_open(const char *name); void kpatch_dump_kelf(struct kpatch_elf *kelf); int is_null_sym(struct symbol *sym); int is_file_sym(struct symbol *sym); int is_local_func_sym(struct symbol *sym); int is_local_sym(struct symbol *sym); void print_strtab(char *buf, size_t size); void kpatch_create_shstrtab(struct kpatch_elf *kelf); void kpatch_create_strtab(struct kpatch_elf *kelf); void kpatch_create_symtab(struct kpatch_elf *kelf); struct section *create_section_pair(struct kpatch_elf *kelf, char *name, int entsize, int nr); void kpatch_remove_and_free_section(struct kpatch_elf *kelf, char *secname); void kpatch_reindex_elements(struct kpatch_elf *kelf); void kpatch_rebuild_rela_section_data(struct section *sec); void kpatch_write_output_elf(struct kpatch_elf *kelf, Elf *elf, char *outfile); void kpatch_elf_teardown(struct kpatch_elf *kelf); void kpatch_elf_free(struct kpatch_elf *kelf); #endif /* _KPATCH_ELF_H_ */ kpatch-0.5.0/kpatch-build/kpatch-gcc000077500000000000000000000032471321664017000172750ustar00rootroot00000000000000#!/bin/bash if [[ ${KPATCH_GCC_DEBUG:-0} -ne 0 ]]; then set -o xtrace fi TOOLCHAINCMD="$1" shift if [[ -z "$KPATCH_GCC_TEMPDIR" ]]; then exec 
"$TOOLCHAINCMD" "$@" fi declare -a args=("$@") if [[ "$TOOLCHAINCMD" = "gcc" ]] ; then while [ "$#" -gt 0 ]; do if [ "$1" = "-o" ]; then obj="$2" # skip copying the temporary .o files created by # recordmcount.pl [[ "$obj" = */.tmp_mc_*.o ]] && break; [[ "$obj" = */.tmp_*.o ]] && obj="${obj/.tmp_/}" case "$obj" in *.mod.o|\ *built-in.o|\ vmlinux.o|\ .tmp_kallsyms1.o|\ .tmp_kallsyms2.o|\ init/version.o|\ arch/x86/boot/version.o|\ arch/x86/boot/compressed/eboot.o|\ arch/x86/boot/header.o|\ arch/x86/boot/compressed/efi_stub_64.o|\ arch/x86/boot/compressed/piggy.o|\ kernel/system_certificates.o|\ arch/x86/vdso/*|\ arch/x86/entry/vdso/*|\ drivers/firmware/efi/libstub/*|\ arch/powerpc/kernel/prom_init.o|\ .*.o|\ */.lib_exports.o) break ;; *.o) mkdir -p "$KPATCH_GCC_TEMPDIR/orig/$(dirname "$obj")" [[ -e "$obj" ]] && cp -f "$obj" "$KPATCH_GCC_TEMPDIR/orig/$obj" echo "$obj" >> "$KPATCH_GCC_TEMPDIR/changed_objs" break ;; *) break ;; esac fi shift done elif [[ "$TOOLCHAINCMD" = "ld" ]] ; then while [ "$#" -gt 0 ]; do if [ "$1" = "-o" ]; then obj="$2" case "$obj" in *.ko) mkdir -p "$KPATCH_GCC_TEMPDIR/module/$(dirname "$obj")" cp -f "$obj" "$KPATCH_GCC_TEMPDIR/module/$obj" break ;; .tmp_vmlinux*|vmlinux) args+=(--warn-unresolved-symbols) break ;; *) break ;; esac fi shift done fi exec "$TOOLCHAINCMD" "${args[@]}" kpatch-0.5.0/kpatch-build/kpatch-intermediate.h000066400000000000000000000025601321664017000214330ustar00rootroot00000000000000/* * kpatch-intermediate.h * * Structures for intermediate .kpatch.* sections * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. */ #ifndef _KPATCH_INTERMEDIATE_H_ #define _KPATCH_INTERMEDIATE_H_ /* For .kpatch.{symbols,relocations,arch} sections */ struct kpatch_symbol { unsigned long src; unsigned long pos; unsigned char bind, type; char *name; char *objname; /* object to which this sym belongs */ }; struct kpatch_relocation { unsigned long dest; unsigned int type; int addend; int external; char *objname; /* object to which this rela applies to */ struct kpatch_symbol *ksym; }; struct kpatch_arch { unsigned long sec; char *objname; }; #endif /* _KPATCH_INTERMEDIATE_H_ */ kpatch-0.5.0/kpatch-build/list.h000066400000000000000000000150171321664017000164650ustar00rootroot00000000000000/* * list.h * * Adapted from http://www.mcs.anl.gov/~kazutomo/list/list.h which is a * userspace port of the Linux kernel implementation in include/linux/list.h * * Thus licensed as GPLv2. * * Copyright (C) 2014 Seth Jennings * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. 
* * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. */ #ifndef _LIST_H #define _LIST_H /** * Get offset of a member */ #define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER) /** * Casts a member of a structure out to the containing structure * @param ptr the pointer to the member. * @param type the type of the container struct this is embedded in. * @param member the name of the member within the struct. * */ #define container_of(ptr, type, member) ({ \ const typeof( ((type *)0)->member ) *__mptr = (ptr); \ (type *)( (char *)__mptr - offsetof(type,member) );}) /* * These are non-NULL pointers that will result in page faults * under normal circumstances, used to verify that nobody uses * non-initialized list entries. */ #define LIST_POISON1 ((void *) 0x00100100) #define LIST_POISON2 ((void *) 0x00200200) /** * Simple doubly linked list implementation. * * Some of the internal functions ("__xxx") are useful when * manipulating whole lists rather than single entries, as * sometimes we already know the next/prev entries and we can * generate better code by using them directly rather than * using the generic single-entry routines. */ struct list_head { struct list_head *next, *prev; }; #define LIST_HEAD_INIT(name) { &(name), &(name) } #define LIST_HEAD(name) \ struct list_head name = LIST_HEAD_INIT(name) #define INIT_LIST_HEAD(ptr) do { \ (ptr)->next = (ptr); (ptr)->prev = (ptr); \ } while (0) /* * Insert a new entry between two known consecutive entries. * * This is only for internal list manipulation where we know * the prev/next entries already! */ static inline void __list_add(struct list_head *new, struct list_head *prev, struct list_head *next) { next->prev = new; new->next = next; new->prev = prev; prev->next = new; } /** * list_add - add a new entry * @new: new entry to be added * @head: list head to add it after * * Insert a new entry after the specified head. * This is good for implementing stacks. */ static inline void list_add(struct list_head *new, struct list_head *head) { __list_add(new, head, head->next); } /** * list_add_tail - add a new entry * @new: new entry to be added * @head: list head to add it before * * Insert a new entry before the specified head. * This is useful for implementing queues. */ static inline void list_add_tail(struct list_head *new, struct list_head *head) { __list_add(new, head->prev, head); } /* * Delete a list entry by making the prev/next entries * point to each other. * * This is only for internal list manipulation where we know * the prev/next entries already! */ static inline void __list_del(struct list_head * prev, struct list_head * next) { next->prev = prev; prev->next = next; } /** * list_del - deletes entry from list. * @entry: the element to delete from the list. * Note: list_empty on entry does not return true after this, the entry is * in an undefined state. 
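 *
 * (added note) "Undefined state" here means the entry's next/prev pointers
 * are set to the LIST_POISON1/LIST_POISON2 values defined above, so that any
 * stale dereference of a deleted entry faults immediately instead of
 * silently corrupting a list.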
*/ static inline void list_del(struct list_head *entry) { __list_del(entry->prev, entry->next); entry->next = LIST_POISON1; entry->prev = LIST_POISON2; } /** * list_replace - replace old entry by new one * @old : the element to be replaced * @new : the new element to insert * * If @old was empty, it will be overwritten. */ static inline void list_replace(struct list_head *old, struct list_head *new) { new->next = old->next; new->next->prev = new; new->prev = old->prev; new->prev->next = new; } #define list_entry(ptr, type, member) \ container_of(ptr, type, member) /** * list_first_entry - get the first element from a list * @ptr: the list head to take the element from. * @type: the type of the struct this is embedded in. * @member: the name of the list_struct within the struct. * * Note, that list is expected to be not empty. */ #define list_first_entry(ptr, type, member) \ list_entry((ptr)->next, type, member) /** * list_next_entry - get the next element in list * @pos: the type * to cursor * @member: the name of the list_struct within the struct. */ #define list_next_entry(pos, member) \ list_entry((pos)->member.next, typeof(*(pos)), member) /** * list_for_each_entry - iterate over list of given type * @pos: the type * to use as a loop counter. * @head: the head for your list. * @member: the name of the list_struct within the struct. */ #define list_for_each_entry(pos, head, member) \ for (pos = list_entry((head)->next, typeof(*pos), member); \ &pos->member != (head); \ pos = list_entry(pos->member.next, typeof(*pos), member)) /** * list_for_each_entry_continue - continue iteration over list of given type * @pos: the type * to use as a loop cursor. * @head: the head for your list. * @member: the name of the list_struct within the struct. * * Continue to iterate over list of given type, continuing after * the current position. */ #define list_for_each_entry_continue(pos, head, member) \ for (pos = list_next_entry(pos, member); \ &pos->member != (head); \ pos = list_next_entry(pos, member)) /** * list_for_each_entry_safe - iterate over list of given type safe against removal of list entry * @pos: the type * to use as a loop counter. * @n: another type * to use as temporary storage * @head: the head for your list. * @member: the name of the list_struct within the struct. */ #define list_for_each_entry_safe(pos, n, head, member) \ for (pos = list_entry((head)->next, typeof(*pos), member), \ n = list_entry(pos->member.next, typeof(*pos), member); \ &pos->member != (head); \ pos = n, n = list_entry(n->member.next, typeof(*n), member)) #endif /* _LIST_H_ */ kpatch-0.5.0/kpatch-build/log.h000066400000000000000000000011371321664017000162710ustar00rootroot00000000000000#ifndef _LOG_H_ #define _LOG_H_ #include /* Files that include log.h must define loglevel and childobj */ extern enum loglevel loglevel; extern char *childobj; #define ERROR(format, ...) \ error(1, 0, "ERROR: %s: %s: %d: " format, childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__) #define log_debug(format, ...) log(DEBUG, format, ##__VA_ARGS__) #define log_normal(format, ...) log(NORMAL, "%s: " format, childobj, ##__VA_ARGS__) #define log(level, format, ...) \ ({ \ if (loglevel <= (level)) \ printf(format, ##__VA_ARGS__); \ }) enum loglevel { DEBUG, NORMAL }; #endif /* _LOG_H_ */ kpatch-0.5.0/kpatch-build/lookup.c000066400000000000000000000242631321664017000170210ustar00rootroot00000000000000/* * lookup.c * * This file contains functions that assist in the reading and searching * the symbol table of an ELF object. 
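 *
 * (added note) Two tables are built here: the object symbols read from the
 * target's .symtab (vmlinux or a module) and the exported symbols parsed
 * from Module.symvers. create-diff-object can query them for a symbol's
 * address, size and local position, and to check whether a reference can be
 * satisfied by an exported symbol.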
* * Copyright (C) 2014 Seth Jennings * Copyright (C) 2014 Josh Poimboeuf * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. */ #include #include #include #include #include #include #include #include #include #include #include #include #include "lookup.h" #include "log.h" struct object_symbol { unsigned long value; unsigned long size; char *name; int type, bind, skip; }; struct export_symbol { char *name; char *objname; }; struct lookup_table { int obj_nr, exp_nr; struct object_symbol *obj_syms; struct export_symbol *exp_syms; struct object_symbol *local_syms; int vmlinux; }; #define for_each_obj_symbol(ndx, iter, table) \ for (ndx = 0, iter = table->obj_syms; ndx < table->obj_nr; ndx++, iter++) #define for_each_obj_symbol_continue(ndx, iter, table) \ for (iter = table->obj_syms + ndx; ndx < table->obj_nr; ndx++, iter++) #define for_each_exp_symbol(ndx, iter, table) \ for (ndx = 0, iter = table->exp_syms; ndx < table->exp_nr; ndx++, iter++) static int discarded_sym(struct lookup_table *table, struct sym_compare_type *sym) { if (table->vmlinux && sym->name && (!strncmp(sym->name, "__exitcall_", 11) || !strncmp(sym->name, "__brk_reservation_fn_", 21) || !strncmp(sym->name, "__func_stack_frame_non_standard_", 32))) return 1; return 0; } static int locals_match(struct lookup_table *table, int idx, struct sym_compare_type *child_locals) { struct sym_compare_type *child; struct object_symbol *sym; int i, found; i = idx + 1; for_each_obj_symbol_continue(i, sym, table) { if (sym->type == STT_FILE) break; if (sym->bind != STB_LOCAL) continue; if (sym->type != STT_FUNC && sym->type != STT_OBJECT) continue; found = 0; for (child = child_locals; child->name; child++) { if (child->type == sym->type && !strcmp(child->name, sym->name)) { found = 1; break; } } if (!found) return 0; } for (child = child_locals; child->name; child++) { /* * Symbols which get discarded at link time are missing from * the lookup table, so skip them. 
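 *
 * (added note) See discarded_sym() above: for vmlinux, init/exit-time
 * symbols such as __exitcall_* are dropped at link time, so requiring an
 * exact match on them here would make every file comparison fail.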
*/ if (discarded_sym(table, child)) continue; found = 0; i = idx + 1; for_each_obj_symbol_continue(i, sym, table) { if (sym->type == STT_FILE) break; if (sym->bind != STB_LOCAL) continue; if (sym->type != STT_FUNC && sym->type != STT_OBJECT) continue; if (!strcmp(child->name, sym->name)) { found = 1; break; } } if (!found) return 0; } return 1; } static void find_local_syms(struct lookup_table *table, char *hint, struct sym_compare_type *child_locals) { struct object_symbol *sym; int i; if (!child_locals) return; for_each_obj_symbol(i, sym, table) { if (sym->type != STT_FILE) continue; if (strcmp(hint, sym->name)) continue; if (!locals_match(table, i, child_locals)) continue; if (table->local_syms) ERROR("find_local_syms for %s: found_dup", hint); table->local_syms = sym; } if (!table->local_syms) ERROR("find_local_syms for %s: found_none", hint); } static void obj_read(struct lookup_table *table, char *path) { Elf *elf; int fd, i, len; Elf_Scn *scn; GElf_Shdr sh; GElf_Sym sym; Elf_Data *data; char *name; struct object_symbol *mysym; size_t shstrndx; if ((fd = open(path, O_RDONLY, 0)) < 0) ERROR("open"); elf_version(EV_CURRENT); elf = elf_begin(fd, ELF_C_READ_MMAP, NULL); if (!elf) { printf("%s\n", elf_errmsg(-1)); ERROR("elf_begin"); } if (elf_getshdrstrndx(elf, &shstrndx)) ERROR("elf_getshdrstrndx"); scn = NULL; while ((scn = elf_nextscn(elf, scn))) { if (!gelf_getshdr(scn, &sh)) ERROR("gelf_getshdr"); name = elf_strptr(elf, shstrndx, sh.sh_name); if (!name) ERROR("elf_strptr scn"); if (!strcmp(name, ".symtab")) break; } if (!scn) ERROR(".symtab section not found"); data = elf_getdata(scn, NULL); if (!data) ERROR("elf_getdata"); len = sh.sh_size / sh.sh_entsize; table->obj_syms = malloc(len * sizeof(*table->obj_syms)); if (!table->obj_syms) ERROR("malloc table.obj_syms"); memset(table->obj_syms, 0, len * sizeof(*table->obj_syms)); table->obj_nr = len; for_each_obj_symbol(i, mysym, table) { if (!gelf_getsym(data, i, &sym)) ERROR("gelf_getsym"); if (sym.st_shndx == SHN_UNDEF) { mysym->skip = 1; continue; } name = elf_strptr(elf, sh.sh_link, sym.st_name); if(!name) ERROR("elf_strptr sym"); mysym->value = sym.st_value; mysym->size = sym.st_size; mysym->type = GELF_ST_TYPE(sym.st_info); mysym->bind = GELF_ST_BIND(sym.st_info); mysym->name = strdup(name); if (!mysym->name) ERROR("strdup"); } close(fd); elf_end(elf); } /* Strip the path and replace '-' with '_' */ static char *make_modname(char *modname) { char *cur; if (!modname) return NULL; cur = modname; while (*cur != '\0') { if (*cur == '-') *cur = '_'; cur++; } return basename(modname); } static void symvers_read(struct lookup_table *table, char *path) { FILE *file; unsigned int crc, i = 0; char name[256], mod[256], export[256]; char *objname, *symname; if ((file = fopen(path, "r")) < 0) ERROR("fopen"); while (fscanf(file, "%x %s %s %s\n", &crc, name, mod, export) != EOF) table->exp_nr++; table->exp_syms = malloc(table->exp_nr * sizeof(*table->exp_syms)); if (!table->exp_syms) ERROR("malloc table.exp_syms"); memset(table->exp_syms, 0, table->exp_nr * sizeof(*table->exp_syms)); rewind(file); while (fscanf(file, "%x %s %s %s\n", &crc, name, mod, export) != EOF) { symname = strdup(name); if (!symname) perror("strdup"); objname = strdup(mod); if (!objname) perror("strdup"); /* Modifies objname in-place */ objname = make_modname(objname); table->exp_syms[i].name = symname; table->exp_syms[i].objname = objname; i++; } fclose(file); } struct lookup_table *lookup_open(char *obj_path, char *symvers_path, char *hint, struct sym_compare_type 
*locals) { struct lookup_table *table; table = malloc(sizeof(*table)); if (!table) ERROR("malloc table"); memset(table, 0, sizeof(*table)); table->vmlinux = !strncmp(basename(obj_path), "vmlinux", 7); obj_read(table, obj_path); symvers_read(table, symvers_path); find_local_syms(table, hint, locals); return table; } void lookup_close(struct lookup_table *table) { free(table->obj_syms); free(table->exp_syms); free(table); } int lookup_local_symbol(struct lookup_table *table, char *name, struct lookup_result *result) { struct object_symbol *sym; unsigned long pos = 0; int i, match = 0, in_file = 0; if (!table->local_syms) return 1; memset(result, 0, sizeof(*result)); for_each_obj_symbol(i, sym, table) { if (sym->skip) continue; if (sym->bind == STB_LOCAL && !strcmp(sym->name, name)) pos++; if (table->local_syms == sym) { in_file = 1; continue; } if (!in_file) continue; if (sym->type == STT_FILE) break; if (sym->bind == STB_LOCAL && !strcmp(sym->name, name)) { match = 1; break; } } if (!match) return 1; result->pos = pos; result->value = sym->value; result->size = sym->size; return 0; } int lookup_global_symbol(struct lookup_table *table, char *name, struct lookup_result *result) { struct object_symbol *sym; int i; memset(result, 0, sizeof(*result)); for_each_obj_symbol(i, sym, table) { if (!sym->skip && (sym->bind == STB_GLOBAL || sym->bind == STB_WEAK) && !strcmp(sym->name, name)) { result->value = sym->value; result->size = sym->size; result->pos = 0; /* always 0 for global symbols */ return 0; } } return 1; } int lookup_is_exported_symbol(struct lookup_table *table, char *name) { struct export_symbol *sym, *match = NULL; int i; for_each_exp_symbol(i, sym, table) { if (!strcmp(sym->name, name)) { if (match) ERROR("duplicate exported symbol found for %s", name); match = sym; } } return !!match; } /* * lookup_exported_symbol_objname - find the object/module an exported * symbol belongs to. */ char *lookup_exported_symbol_objname(struct lookup_table *table, char *name) { struct export_symbol *sym, *match = NULL; int i; for_each_exp_symbol(i, sym, table) { if (!strcmp(sym->name, name)) { if (match) ERROR("duplicate exported symbol found for %s", name); match = sym; } } if (match) return match->objname; return NULL; } #if 0 /* for local testing */ static void find_this(struct lookup_table *table, char *sym, char *hint) { struct lookup_result result; if (hint) lookup_local_symbol(table, sym, hint, &result); else lookup_global_symbol(table, sym, &result); printf("%s %s w/ %s hint at 0x%016lx len %lu pos %lu\n", hint ? "local" : "global", sym, hint ? hint : "no", result.value, result.size, result.pos); } int main(int argc, char **argv) { struct lookup_table *vmlinux; if (argc != 2) return 1; vmlinux = lookup_open(argv[1]); printf("printk is%s exported\n", lookup_is_exported_symbol(vmlinux, "__fentry__") ? "" : " not"); printf("meminfo_proc_show is%s exported\n", lookup_is_exported_symbol(vmlinux, "meminfo_proc_show") ? 
"" : " not"); find_this(vmlinux, "printk", NULL); find_this(vmlinux, "pages_to_scan_show", "ksm.c"); find_this(vmlinux, "pages_to_scan_show", "huge_memory.c"); find_this(vmlinux, "pages_to_scan_show", NULL); /* should fail */ lookup_close(vmlinux); return 0; } #endif kpatch-0.5.0/kpatch-build/lookup.h000066400000000000000000000014221321664017000170160ustar00rootroot00000000000000#ifndef _LOOKUP_H_ #define _LOOKUP_H_ struct lookup_table; struct lookup_result { unsigned long value; unsigned long size; unsigned long pos; }; struct sym_compare_type { char *name; int type; }; struct lookup_table *lookup_open(char *obj_path, char *symvers_path, char *hint, struct sym_compare_type *locals); void lookup_close(struct lookup_table *table); int lookup_local_symbol(struct lookup_table *table, char *name, struct lookup_result *result); int lookup_global_symbol(struct lookup_table *table, char *name, struct lookup_result *result); int lookup_is_exported_symbol(struct lookup_table *table, char *name); char *lookup_exported_symbol_objname(struct lookup_table *table, char *name); #endif /* _LOOKUP_H_ */ kpatch-0.5.0/kpatch/000077500000000000000000000000001321664017000142405ustar00rootroot00000000000000kpatch-0.5.0/kpatch/Makefile000066400000000000000000000002211321664017000156730ustar00rootroot00000000000000include ../Makefile.inc all: install: all $(INSTALL) -d $(SBINDIR) $(INSTALL) kpatch $(SBINDIR) uninstall: $(RM) $(SBINDIR)/kpatch clean: kpatch-0.5.0/kpatch/kpatch000077500000000000000000000324601321664017000154450ustar00rootroot00000000000000#!/bin/bash # # kpatch hot patch module management script # # Copyright (C) 2014 Seth Jennings # Copyright (C) 2014 Josh Poimboeuf # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, # 02110-1301, USA. # This is the kpatch user script that manages installing, loading, and # displaying information about kernel patch modules installed on the system. INSTALLDIR=/var/lib/kpatch SCRIPTDIR="$(readlink -f "$(dirname "$(type -p "$0")")")" VERSION="0.4.0" POST_ENABLE_WAIT=5 # seconds POST_SIGNAL_WAIT=60 # seconds usage_cmd() { printf ' %-20s\n %s\n' "$1" "$2" >&2 } usage () { # ATTENTION ATTENTION ATTENTION ATTENTION ATTENTION ATTENTION # When changing this, please also update the man page. Thanks! 
echo "usage: kpatch []" >&2 echo >&2 echo "Valid commands:" >&2 usage_cmd "install [-k|--kernel-version=] " "install patch module to the initrd to be loaded at boot" usage_cmd "uninstall [-k|--kernel-version=] " "uninstall patch module from the initrd" echo >&2 usage_cmd "load --all" "load all installed patch modules into the running kernel" usage_cmd "load " "load patch module into the running kernel" usage_cmd "unload --all" "unload all patch modules from the running kernel" usage_cmd "unload " "unload patch module from the running kernel" echo >&2 usage_cmd "info " "show information about a patch module" echo >&2 usage_cmd "list" "list installed patch modules" echo >&2 usage_cmd "signal" "signal/poke any process stalling the current patch transition" echo >&2 usage_cmd "version" "display the kpatch version" exit 1 } warn() { echo "kpatch: $*" >&2 } die() { warn "$@" exit 1 } __find_module () { MODULE="$1" [[ -f "$MODULE" ]] && return MODULE="$INSTALLDIR/$(uname -r)/$1" [[ -f "$MODULE" ]] && return return 1 } mod_name () { MODNAME="$(basename "$1")" MODNAME="${MODNAME%.ko}" MODNAME="${MODNAME//-/_}" } find_module () { arg="$1" if [[ "$arg" =~ \.ko ]]; then __find_module "$arg" || return 1 mod_name "$MODULE" return else for i in "$INSTALLDIR/$(uname -r)"/*; do mod_name "$i" if [[ "$MODNAME" == "$arg" ]]; then MODULE="$i" return fi done fi return 1 } find_core_module() { COREMOD="$SCRIPTDIR"/../kmod/core/kpatch.ko [[ -f "$COREMOD" ]] && return COREMOD="/usr/local/lib/kpatch/$(uname -r)/kpatch.ko" [[ -f "$COREMOD" ]] && return COREMOD="/usr/lib/kpatch/$(uname -r)/kpatch.ko" [[ -f "$COREMOD" ]] && return COREMOD="/usr/local/lib/modules/$(uname -r)/extra/kpatch/kpatch.ko" [[ -f "$COREMOD" ]] && return COREMOD="/usr/lib/modules/$(uname -r)/extra/kpatch/kpatch.ko" [[ -f "$COREMOD" ]] && return return 1 } core_loaded () { grep -q -e "T klp_register_patch" -e "T kpatch_register" /proc/kallsyms } get_module_name () { readelf -p .gnu.linkonce.this_module "$1" | grep '\[.*\]' | awk '{print $3}' } init_sysfs_var() { # If the kernel is configured with CONFIG_LIVEPATCH, use that. # Otherwise, use the kpatch core module (kpatch.ko). if [[ -e /sys/kernel/livepatch ]] ; then # livepatch ABI SYSFS="/sys/kernel/livepatch" elif [[ -e /sys/kernel/kpatch/patches ]] ; then # kpatch pre-0.4 ABI SYSFS="/sys/kernel/kpatch/patches" else # kpatch 0.4 ABI SYSFS="/sys/kernel/kpatch" fi } verify_module_checksum () { modname="$(get_module_name "$1")" [[ -z "$modname" ]] && return 1 checksum="$(readelf -p .kpatch.checksum "$1" 2>&1 | grep '\[.*\]' | awk '{print $3}')" # Fail checksum match only if both exist and diverge if [[ ! -z "$checksum" ]] && [[ -e "$SYSFS/${modname}/checksum" ]] ; then sysfs_checksum="$(cat "$SYSFS/${modname}/checksum")" [[ "$checksum" == "$sysfs_checksum" ]] || return 1 fi return 0 } in_transition() { local moddir="$SYSFS/$1" [[ $(cat "$moddir/transition" 2>/dev/null) == "1" ]] && return 0 return 1 } is_stalled() { local module="$1" local pid="$2" local patch_enabled local patch_state patch_enabled="$(cat "$SYSFS/$module/enabled" 2>/dev/null)" patch_state="$(cat "/proc/$pid/patch_state" 2>/dev/null)" # No patch transition in progress [[ "$patch_state" == "-1" ]] && return 1 [[ -z "$patch_enabled" ]] || [[ -z "$patch_state" ]] && return 1 # Stalls can be determined if the process state does not match # the transition target (ie, "enabled" and "patched", "disabled" # and "unpatched"). 
The state value enumerations match, so we # can just compare them directly: [[ "$patch_enabled" != "$patch_state" ]] && return 0 return 1 } get_transition_patch() { local module local modname for module in "$SYSFS"/*; do modname=$(basename "$module") if in_transition "$modname" ; then echo "$modname" return fi done } show_stalled_processes() { local module local proc_task local tid module=$(get_transition_patch) [[ -z "$module" ]] && return echo "" echo "Stalled processes:" for proc_task in /proc/[0-9]*/task/[0-9]*; do tid=${proc_task#*/task/} is_stalled "$module" "$tid" && echo "$tid $(cat "$proc_task"/comm 2>/dev/null)" done } signal_stalled_processes() { local module local proc_task local tid module=$(get_transition_patch) [[ -z "$module" ]] && return if [[ -e "/sys/kernel/livepatch/$module/signal" ]] ; then echo 1 > "/sys/kernel/livepatch/$module/signal" else for proc_task in /proc/[0-9]*/task/[0-9]*; do tid=${proc_task#*/task/} if is_stalled "$module" "$tid" ; then if [[ "$tid" -eq "$$" ]] ; then echo "skipping pid $tid $(cat "$proc_task"/comm 2>/dev/null)" else echo "signaling pid $tid $(cat "$proc_task"/comm 2>/dev/null)" kill -SIGSTOP "$tid" sleep .1 kill -SIGCONT "$tid" fi fi done fi } wait_for_patch_transition() { local module="$1" local i in_transition "$module" || return 0 echo "waiting (up to $POST_ENABLE_WAIT seconds) for patch transition to complete..." for (( i=0; i "${moddir}/enabled" || die "failed to re-enable module $modname" if ! wait_for_patch_transition "$modname" ; then echo "module $modname did not complete its transition, disabling..." echo 0 > "${moddir}/enabled" || die "failed to disable module $modname" wait_for_patch_transition "$modname" die "error: failed to re-enable module $modname (transition stalled), patch disabled" fi return else die "error: cannot re-enable patch module $modname, cannot verify checksum match" fi else die "error: module named $modname already loaded and enabled" fi fi echo "loading patch module: $module" local i=0 while true; do out="$(insmod "$module" 2>&1)" [[ -z "$out" ]] && break echo "$out" 1>&2 [[ ! "$out" =~ "Device or resource busy" ]] && die "failed to load module $module" # "Device or resource busy" means the activeness safety check # failed. Retry in a few seconds. i=$((i+1)) if [[ $i = 5 ]]; then die "failed to load module $module" break else warn "retrying..." sleep 2 fi done if ! wait_for_patch_transition "$modname" ; then echo "module $modname did not complete its transition, unloading..." unload_module "$modname" die "error: failed to load module $modname (transition stalled)" fi return 0 } unload_module () { PATCH="${1//-/_}" PATCH="${PATCH%.ko}" ENABLED="$SYSFS/$PATCH/enabled" [[ -e "$ENABLED" ]] || die "patch module $1 is not loaded" if [[ "$(cat "$ENABLED")" -eq 1 ]]; then echo "disabling patch module: $PATCH" echo 0 > "$ENABLED" || die "can't disable $PATCH" fi if ! wait_for_patch_transition "$PATCH" ; then die "error: failed to unload module $PATCH (transition stalled)" fi echo "unloading patch module: $PATCH" # ignore any error here because rmmod can fail if the module used # KPATCH_FORCE_UNSAFE. rmmod "$PATCH" 2> /dev/null || return 0 } get_module_version() { MODVER="$(modinfo -F vermagic "$1")" || return 1 MODVER="${MODVER/ */}" } unset MODULE # Initialize the $SYSFS var. This only works if the core module has been # loaded. Otherwise, the value of $SYSFS doesn't matter at this point anyway, # and we'll have to call this function again after loading it. 
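# (added note) Depending on what the running kernel exposes, init_sysfs_var
# points $SYSFS at one of (paths taken from the function above):
#
#   /sys/kernel/livepatch         - upstream livepatch ABI
#   /sys/kernel/kpatch/patches    - kpatch core module, pre-0.4 ABI
#   /sys/kernel/kpatch            - kpatch core module, 0.4 ABI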
init_sysfs_var [[ "$#" -lt 1 ]] && usage case "$1" in "load") [[ "$#" -ne 2 ]] && usage case "$2" in "--all") for i in "$INSTALLDIR/$(uname -r)"/*.ko; do [[ -e "$i" ]] || continue load_module "$i" || die "failed to load module $i" done ;; *) PATCH="$2" find_module "$PATCH" || die "can't find $PATCH" load_module "$MODULE" || die "failed to load module $PATCH" ;; esac ;; "unload") [[ "$#" -ne 2 ]] && usage case "$2" in "--all") for module in "$SYSFS"/*; do [[ -e "$module" ]] || continue unload_module "$(basename "$module")" || die "failed to unload module $module" done ;; *) unload_module "$(basename "$2")" || die "failed to unload module $2" ;; esac ;; "install") KVER="$(uname -r)" shift options="$(getopt -o k: -l "kernel-version:" -- "$@")" || die "getopt failed" eval set -- "$options" while [[ $# -gt 0 ]]; do case "$1" in -k|--kernel-version) KVER="$2" shift ;; --) [[ -z "$2" ]] && die "no module file specified" PATCH="$2" ;; esac shift done [[ ! -e "$PATCH" ]] && die "$PATCH doesn't exist" [[ "${PATCH: -3}" == ".ko" ]] || die "$PATCH isn't a .ko file" get_module_version "$PATCH" || die "modinfo failed" [[ "$KVER" != "$MODVER" ]] && die "invalid module version $MODVER for kernel $KVER" [[ -e "$INSTALLDIR/$KVER/$(basename "$PATCH")" ]] && die "$PATCH is already installed" echo "installing $PATCH ($KVER)" mkdir -p "$INSTALLDIR/$KVER" || die "failed to create install directory" cp -f "$PATCH" "$INSTALLDIR/$KVER" || die "failed to install module $PATCH" systemctl enable kpatch.service ;; "uninstall") KVER="$(uname -r)" shift options="$(getopt -o k: -l "kernel-version:" -- "$@")" || die "getopt failed" eval set -- "$options" while [[ $# -gt 0 ]]; do case "$1" in -k|--kernel-version) KVER="$2" shift ;; --) [[ -z "$2" ]] && die "no module file specified" PATCH="$2" [[ "$PATCH" != "$(basename "$PATCH")" ]] && die "please supply patch module name without path" ;; esac shift done MODULE="$INSTALLDIR/$KVER/$PATCH" if [[ ! -f "$MODULE" ]]; then mod_name "$PATCH" PATCHNAME="$MODNAME" for i in "$INSTALLDIR/$KVER"/*; do mod_name "$i" if [[ "$MODNAME" == "$PATCHNAME" ]]; then MODULE="$i" break fi done fi [[ ! -e "$MODULE" ]] && die "$PATCH is not installed for kernel $KVER" echo "uninstalling $PATCH ($KVER)" rm -f "$MODULE" || die "failed to uninstall module $PATCH" ;; "list") [[ "$#" -ne 1 ]] && usage echo "Loaded patch modules:" for module in "$SYSFS"/*; do if [[ -e "$module" ]]; then modname=$(basename "$module") if [[ "$(cat "$module/enabled" 2>/dev/null)" -eq 1 ]]; then in_transition "$modname" && state="enabling..." \ || state="enabled" else in_transition "$modname" && state="disabling..." 
\ || state="disabled" fi echo "$modname [$state]" fi done show_stalled_processes echo "" echo "Installed patch modules:" for kdir in "$INSTALLDIR"/*; do [[ -e "$kdir" ]] || continue for module in "$kdir"/*; do [[ -e "$module" ]] || continue mod_name "$module" echo "$MODNAME ($(basename "$kdir"))" done done ;; "info") [[ "$#" -ne 2 ]] && usage PATCH="$2" find_module "$PATCH" || die "can't find $PATCH" echo "Patch information for $PATCH:" modinfo "$MODULE" || die "failed to get info for module $PATCH" ;; "signal") [[ "$#" -ne 1 ]] && usage signal_stalled_processes ;; "help"|"-h"|"--help") usage ;; "version"|"-v"|"--version") echo "$VERSION" ;; *) echo "subcommand $1 not recognized" usage ;; esac kpatch-0.5.0/man/000077500000000000000000000000001321664017000135415ustar00rootroot00000000000000kpatch-0.5.0/man/Makefile000066400000000000000000000006311321664017000152010ustar00rootroot00000000000000include ../Makefile.inc all: kpatch.1.gz kpatch-build.1.gz kpatch.1.gz: kpatch.1 gzip -c -9 $< > $@ kpatch-build.1.gz: kpatch-build.1 gzip -c -9 $< > $@ install: all $(INSTALL) -d $(MANDIR) $(INSTALL) -m 644 kpatch.1.gz $(MANDIR) $(INSTALL) -m 644 kpatch-build.1.gz $(MANDIR) uninstall: $(RM) $(MANDIR)/kpatch.1* $(RM) $(MANDIR)/kpatch-build.1* clean: $(RM) kpatch.1.gz $(RM) kpatch-build.1.gz kpatch-0.5.0/man/kpatch-build.1000066400000000000000000000030231321664017000161700ustar00rootroot00000000000000.\" Manpage for kpatch-build. .\" Contact udoseidel@gmx.de to correct errors or typos. .TH man 1 "23 Mar 2014" "1.0" "kpatch-build man page" .SH NAME kpatch-build \- build script .SH SYNOPSIS kpatch-build [options] .SH DESCRIPTION This script takes a patch based on the version of the kernel currently running and creates a kernel module that will replace modified functions in the kernel such that the patched code takes effect. This script currently only works on Fedora and will need to be adapted to work on other distros. .SH OPTIONS -h|--help Show this help message -r|--sourcerpm Specify kernel source RPM -s|--sourcedir Specify kernel source directory -c|--config Specify kernel config file -v|--vmlinux Specify original vmlinux -t|--target Specify custom kernel build targets -d|--debug Keep scratch files in /tmp --skip-gcc-check Skips check that ensures that the system gcc version and the gcc version that built the kernel match. Skipping this check is not recommended, but is useful if the exact gcc version is not available or is not easily installed. Use only when confident that the two versions of gcc output identical objects for a given target. Otherwise, use of this option might result in unexpected changed objects being detected. .SH SEE ALSO kpatch(1) .SH BUGS No known bugs. .SH AUTHOR Udo Seidel (udoseidel@gmx.de) .SH COPYRIGHT Copyright (C) 2014: Seth Jennings , Copyright (C) 2013,2014: Josh Poimboeuf kpatch-0.5.0/man/kpatch.1000066400000000000000000000023271321664017000151010ustar00rootroot00000000000000.\" Manpage for kpatch. .\" Contact udoseidel@gmx.de to correct errors or typos. .TH man 1 "23 Mar 2014" "1.0" "kpatch man page" .SH NAME kpatch \- hot patch module management .SH SYNOPSIS kpatch [] .SH DESCRIPTION kpatch is a user script that manages installing, loading, and displaying information about kernel patch modules installed on the system. 
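.PP
A typical life cycle for a patch module built with kpatch-build(1) is sketched below; the module name kpatch-example.ko is illustrative only, and each command is described in detail in the COMMANDS section that follows.
.PP
.nf
# kpatch-example.ko is an example name, not a real module
kpatch install kpatch-example.ko
kpatch load kpatch-example.ko
kpatch list
kpatch unload kpatch-example.ko
kpatch uninstall kpatch-example.ko
.fi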
.SH COMMANDS install [-k|--kernel-version=] install patch module to the initrd to be loaded at boot uninstall [-k|--kernel-version=] uninstall patch module from the initrd load --all load all installed patch modules into the running kernel load load patch module into the running kernel unload --all unload all patch modules from the running kernel unload unload patch module from the running kernel info show information about a patch module list list installed patch modules version display the kpatch version .SH SEE ALSO kpatch-build(1) .SH BUGS No known bugs. .SH AUTHOR Udo Seidel (udoseidel@gmx.de) .SH COPYRIGHT Copyright (C) 2014: Seth Jennings and Josh Poimboeuf kpatch-0.5.0/test/000077500000000000000000000000001321664017000137455ustar00rootroot00000000000000kpatch-0.5.0/test/difftree.sh000077500000000000000000000052061321664017000160770ustar00rootroot00000000000000#!/bin/bash # The purpose of this test script is to determine if create-diff-object can # properly recognize object file equivalence when passed the same file for both # the original and patched objects. This verifies that create-diff-object is # correctly parsing, correlating, and comparing the different elements of the # object file. In practice, a situation similar to the test case occurs when a # commonly included header file changes, causing Make to rebuild many objects # that have no functional change. # This script requires a built kernel object tree to be in the kpatch cache # directory at $HOME/.kpatch/obj #set -x OBJDIR="$HOME/.kpatch/obj" SCRIPTDIR="$(readlink -f $(dirname $(type -p $0)))" TEMPDIR=$(mktemp -d) RESULTSDIR="$TEMPDIR/results" VMVLINUX="/usr/lib/debug/lib/modules/$(uname -r)/vmlinux" # path for F20 if [[ ! -d $OBJDIR ]]; then echo "please run kpatch-build to populate the object tree in $OBJDIR" fi cd "$OBJDIR" || exit 1 for i in $(find * -name '*.o') do # copied from kpatch-build/kpatch-gcc; keep in sync case $i in *.mod.o|\ *built-in.o|\ vmlinux.o|\ .tmp_kallsyms1.o|\ .tmp_kallsyms2.o|\ init/version.o|\ arch/x86/boot/version.o|\ arch/x86/boot/compressed/eboot.o|\ arch/x86/boot/header.o|\ arch/x86/boot/compressed/efi_stub_64.o|\ arch/x86/boot/compressed/piggy.o|\ kernel/system_certificates.o|\ .*.o) continue ;; esac # skip objects that are the linked product of more than one object file [[ $(eu-readelf -s $i | grep FILE | wc -l) -ne 1 ]] && continue $SCRIPTDIR/../kpatch-build/create-diff-object $i $i /usr/lib/debug/lib/modules/$(uname -r)/vmlinux "$TEMPDIR/output.o" > "$TEMPDIR/log.txt" 2>&1 RETCODE=$? # expect RETCODE to be 3 indicating no change [[ $RETCODE -eq 3 ]] && continue # otherwise record error mkdir -p $RESULTSDIR/$(dirname $i) || exit 1 cp "$i" "$RESULTSDIR/$i" || exit 1 case $RETCODE in 139) echo "$i: segfault" | tee if [[ ! 
-e core ]]; then echo "no corefile, run "ulimit -c unlimited" to capture corefile" else mv core "$RESULTSDIR/$i.core" || exit 1 fi ;; 0) echo "$i: incorrectly detected change" mv "$TEMPDIR/log.txt" "$RESULTSDIR/$i.log" || exit 1 ;; 1|2) echo "$i: error code $RETCODE" mv "$TEMPDIR/log.txt" "$RESULTSDIR/$i.log" || exit 1 ;; *) exit 1 # script error ;; esac done rm -f "$TEMPDIR/log.txt" > /dev/null 2>&1 # try to group the errors together in some meaningful way cd "$RESULTSDIR" || exit 1 echo "" echo "Results:" for i in $(find * -iname '*.log') do echo $(cat $i | head -1 | cut -f2-3 -d':') done | sort | uniq -c | sort -n -r | tee "$TEMPDIR/results.log" echo "results are in $TEMPDIR" kpatch-0.5.0/test/integration/000077500000000000000000000000001321664017000162705ustar00rootroot00000000000000kpatch-0.5.0/test/integration/.gitignore000066400000000000000000000000301321664017000202510ustar00rootroot00000000000000test.log COMBINED.patch kpatch-0.5.0/test/integration/Makefile000066400000000000000000000017711321664017000177360ustar00rootroot00000000000000include /etc/os-release PATCH_DIR?=${ID}-${VERSION_ID} all: $(error please specify local or remote) local: slow remote: remote_slow slow: clean ./kpatch-test -d $(PATCH_DIR) $(PATCHES) quick: clean ./kpatch-test -d $(PATCH_DIR) --quick $(PATCHES) cached: ./kpatch-test -d $(PATCH_DIR) --cached $(PATCHES) clean: rm -f *.ko *.log COMBINED.patch check_host: ifndef SSH_HOST $(error SSH_HOST is undefined) endif SSH_USER ?= root remote_setup: check_host ssh $(SSH_USER)@$(SSH_HOST) exit ssh $(SSH_USER)@$(SSH_HOST) "ls kpatch-setup &> /dev/null" || \ (scp remote-setup $(SSH_USER)@$(SSH_HOST):kpatch-setup && \ ssh $(SSH_USER)@$(SSH_HOST) "./kpatch-setup") remote_sync: remote_setup ssh $(SSH_USER)@$(SSH_HOST) "rm -rf kpatch-test" rsync -Cavz --include=core $(shell readlink -f ../../..) 
$(SSH_USER)@$(SSH_HOST):kpatch-test ssh $(SSH_USER)@$(SSH_HOST) "cd kpatch-test/kpatch && make" remote_slow: remote_sync ssh $(SSH_USER)@$(SSH_HOST) "cd kpatch-test/kpatch/test/integration && make slow" kpatch-0.5.0/test/integration/centos-7/000077500000000000000000000000001321664017000177275ustar00rootroot00000000000000kpatch-0.5.0/test/integration/centos-7/README000066400000000000000000000000331321664017000206030ustar00rootroot000000000000003.10.0-327.36.3.el7.x86_64 kpatch-0.5.0/test/integration/centos-7/bug-table-section.patch000066400000000000000000000007711321664017000242610ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/proc_sysctl.c src/fs/proc/proc_sysctl.c --- src.orig/fs/proc/proc_sysctl.c 2017-09-22 15:27:21.698056175 -0400 +++ src/fs/proc/proc_sysctl.c 2017-09-22 15:27:21.769056469 -0400 @@ -266,6 +266,8 @@ void sysctl_head_put(struct ctl_table_he static struct ctl_table_header *sysctl_head_grab(struct ctl_table_header *head) { + if (jiffies == 0) + printk("kpatch-test: testing __bug_table section changes\n"); BUG_ON(!head); spin_lock(&sysctl_lock); if (!use_table(head)) kpatch-0.5.0/test/integration/centos-7/cmdline-string-LOADED.test000077500000000000000000000000511321664017000244740ustar00rootroot00000000000000#!/bin/bash grep kpatch=1 /proc/cmdline kpatch-0.5.0/test/integration/centos-7/cmdline-string.patch000066400000000000000000000006011321664017000236640ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2017-09-22 15:27:21.698056175 -0400 +++ src/fs/proc/cmdline.c 2017-09-22 15:27:22.955061380 -0400 @@ -5,7 +5,7 @@ static int cmdline_proc_show(struct seq_file *m, void *v) { - seq_printf(m, "%s\n", saved_command_line); + seq_printf(m, "%s kpatch=1\n", saved_command_line); return 0; } kpatch-0.5.0/test/integration/centos-7/data-new-LOADED.test000077500000000000000000000000541321664017000232600ustar00rootroot00000000000000#!/bin/bash grep "kpatch: 5" /proc/meminfo kpatch-0.5.0/test/integration/centos-7/data-new.patch000066400000000000000000000013501321664017000224470ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-09-22 15:27:21.699056179 -0400 +++ src/fs/proc/meminfo.c 2017-09-22 15:27:24.102066130 -0400 @@ -20,6 +20,8 @@ void __attribute__((weak)) arch_report_m { } +static int foo = 5; + static int meminfo_proc_show(struct seq_file *m, void *v) { struct sysinfo i; @@ -106,6 +108,7 @@ static int meminfo_proc_show(struct seq_ #ifdef CONFIG_TRANSPARENT_HUGEPAGE "AnonHugePages: %8lu kB\n" #endif + "kpatch: %d" , K(i.totalram), K(i.freeram), @@ -167,6 +170,7 @@ static int meminfo_proc_show(struct seq_ ,K(global_page_state(NR_ANON_TRANSPARENT_HUGEPAGES) * HPAGE_PMD_NR) #endif + ,foo ); hugetlb_report_meminfo(m); kpatch-0.5.0/test/integration/centos-7/data-read-mostly.patch000066400000000000000000000004471321664017000241240ustar00rootroot00000000000000diff -Nupr src.orig/net/core/dev.c src/net/core/dev.c --- src.orig/net/core/dev.c 2017-09-22 15:27:21.759056428 -0400 +++ src/net/core/dev.c 2017-09-22 15:27:25.244070859 -0400 @@ -4012,6 +4012,7 @@ ncls: case RX_HANDLER_PASS: break; default: + printk("BUG!\n"); BUG(); } } kpatch-0.5.0/test/integration/centos-7/fixup-section.patch000066400000000000000000000007331321664017000235500ustar00rootroot00000000000000diff -Nupr src.orig/fs/readdir.c src/fs/readdir.c --- src.orig/fs/readdir.c 2017-09-22 15:27:21.658056010 -0400 +++ src/fs/readdir.c 2017-09-22 15:27:26.378075555 -0400 @@ 
-166,6 +166,8 @@ static int filldir(void * __buf, const c goto efault; } dirent = buf->current_dir; + if (dirent->d_ino == 12345678) + printk("kpatch-test: testing .fixup section changes\n"); if (__put_user(d_ino, &dirent->d_ino)) goto efault; if (__put_user(reclen, &dirent->d_reclen)) kpatch-0.5.0/test/integration/centos-7/gcc-constprop.patch000066400000000000000000000006471321664017000235400ustar00rootroot00000000000000diff -Nupr src.orig/kernel/time/timekeeping.c src/kernel/time/timekeeping.c --- src.orig/kernel/time/timekeeping.c 2017-09-22 15:27:21.602055778 -0400 +++ src/kernel/time/timekeeping.c 2017-09-22 15:27:27.522080292 -0400 @@ -877,6 +877,9 @@ void do_gettimeofday(struct timeval *tv) { struct timespec64 now; + if (!tv) + return; + getnstimeofday64(&now); tv->tv_sec = now.tv_sec; tv->tv_usec = now.tv_nsec/1000; kpatch-0.5.0/test/integration/centos-7/gcc-isra.patch000066400000000000000000000006541321664017000224450ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/proc_sysctl.c src/fs/proc/proc_sysctl.c --- src.orig/fs/proc/proc_sysctl.c 2017-09-22 15:27:21.698056175 -0400 +++ src/fs/proc/proc_sysctl.c 2017-09-22 15:27:28.670085046 -0400 @@ -24,6 +24,7 @@ void proc_sys_poll_notify(struct ctl_tab if (!poll) return; + printk("kpatch-test: testing gcc .isra function name mangling\n"); atomic_inc(&poll->event); wake_up_interruptible(&poll->wait); } kpatch-0.5.0/test/integration/centos-7/gcc-mangled-3.patch000066400000000000000000000006041321664017000232510ustar00rootroot00000000000000diff -Nupr src.orig/mm/slub.c src/mm/slub.c --- src.orig/mm/slub.c 2017-09-22 15:27:21.618055844 -0400 +++ src/mm/slub.c 2017-09-22 15:27:29.830089850 -0400 @@ -5528,6 +5528,9 @@ void get_slabinfo(struct kmem_cache *s, unsigned long nr_free = 0; int node; + if (!jiffies) + printk("slabinfo\n"); + for_each_online_node(node) { struct kmem_cache_node *n = get_node(s, node); kpatch-0.5.0/test/integration/centos-7/gcc-static-local-var-2.patch000066400000000000000000000014161321664017000250100ustar00rootroot00000000000000diff -Nupr src.orig/mm/mmap.c src/mm/mmap.c --- src.orig/mm/mmap.c 2017-09-22 15:27:21.618055844 -0400 +++ src/mm/mmap.c 2017-09-22 15:27:31.024094794 -0400 @@ -1677,6 +1677,7 @@ static inline int accountable_mapping(st return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE; } +#include "kpatch-macros.h" unsigned long mmap_region(struct file *file, unsigned long addr, unsigned long len, vm_flags_t vm_flags, unsigned long pgoff, struct list_head *uf) @@ -1687,6 +1688,9 @@ unsigned long mmap_region(struct file *f struct rb_node **rb_link, *rb_parent; unsigned long charged = 0; + if (!jiffies) + printk("kpatch mmap foo\n"); + /* Check against address space limit. 
*/ if (!may_expand_vm(mm, len >> PAGE_SHIFT)) { unsigned long nr_pages; kpatch-0.5.0/test/integration/centos-7/gcc-static-local-var-3.patch000066400000000000000000000006511321664017000250110ustar00rootroot00000000000000diff -Nupr src.orig/kernel/sys.c src/kernel/sys.c --- src.orig/kernel/sys.c 2017-09-22 15:27:21.601055773 -0400 +++ src/kernel/sys.c 2017-09-22 15:27:32.170099540 -0400 @@ -554,8 +554,15 @@ SYSCALL_DEFINE4(reboot, int, magic1, int return ret; } +void kpatch_bar(void) +{ + if (!jiffies) + printk("kpatch_foo\n"); +} + static void deferred_cad(struct work_struct *dummy) { + kpatch_bar(); kernel_restart(NULL); } kpatch-0.5.0/test/integration/centos-7/gcc-static-local-var-4.patch000066400000000000000000000010051321664017000250040ustar00rootroot00000000000000diff -Nupr src.orig/fs/aio.c src/fs/aio.c --- src.orig/fs/aio.c 2017-09-22 15:27:21.702056192 -0400 +++ src/fs/aio.c 2017-09-22 15:27:33.299104215 -0400 @@ -219,9 +219,16 @@ static int __init aio_setup(void) } __initcall(aio_setup); +void kpatch_aio_foo(void) +{ + if (!jiffies) + printk("kpatch aio foo\n"); +} + static void put_aio_ring_file(struct kioctx *ctx) { struct file *aio_ring_file = ctx->aio_ring_file; + kpatch_aio_foo(); if (aio_ring_file) { truncate_setsize(aio_ring_file->f_inode, 0); kpatch-0.5.0/test/integration/centos-7/gcc-static-local-var-4.test000077500000000000000000000001521321664017000246710ustar00rootroot00000000000000#!/bin/bash if $(nm kpatch-gcc-static-local-var-4.ko | grep -q free_ioctx); then exit 1 else exit 0 fi kpatch-0.5.0/test/integration/centos-7/gcc-static-local-var-5.patch000066400000000000000000000021571321664017000250160ustar00rootroot00000000000000diff -Nupr src.orig/kernel/audit.c src/kernel/audit.c --- src.orig/kernel/audit.c 2017-09-22 15:27:21.602055778 -0400 +++ src/kernel/audit.c 2017-09-22 15:27:34.429108894 -0400 @@ -205,6 +205,12 @@ void audit_panic(const char *message) } } +void kpatch_audit_foo(void) +{ + if (!jiffies) + printk("kpatch audit foo\n"); +} + static inline int audit_rate_check(void) { static unsigned long last_check = 0; @@ -215,6 +221,7 @@ static inline int audit_rate_check(void) unsigned long elapsed; int retval = 0; + kpatch_audit_foo(); if (!audit_rate_limit) return 1; spin_lock_irqsave(&lock, flags); @@ -234,6 +241,11 @@ static inline int audit_rate_check(void) return retval; } +noinline void kpatch_audit_check(void) +{ + audit_rate_check(); +} + /** * audit_log_lost - conditionally log lost audit message event * @message: the message stating reason for lost audit message @@ -282,6 +294,8 @@ static int audit_log_config_change(char struct audit_buffer *ab; int rc = 0; + kpatch_audit_check(); + ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_CONFIG_CHANGE); if (unlikely(!ab)) return rc; kpatch-0.5.0/test/integration/centos-7/gcc-static-local-var.patch000066400000000000000000000012151321664017000246460ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kernel/ldt.c src/arch/x86/kernel/ldt.c --- src.orig/arch/x86/kernel/ldt.c 2017-09-22 15:27:20.847052651 -0400 +++ src/arch/x86/kernel/ldt.c 2017-09-22 15:27:35.573113632 -0400 @@ -98,6 +98,12 @@ static inline int copy_ldt(mm_context_t return 0; } +void hi_there(void) +{ + if (!jiffies) + printk("hi there\n"); +} + /* * we do not have to muck with descriptors here, that is * done in switch_mm() as needed. 
@@ -107,6 +113,8 @@ int init_new_context(struct task_struct struct mm_struct *old_mm; int retval = 0; + hi_there(); + mutex_init(&mm->context.lock); mm->context.size = 0; old_mm = current->mm; kpatch-0.5.0/test/integration/centos-7/macro-hooks-LOADED.test000077500000000000000000000000731321664017000240030ustar00rootroot00000000000000#!/bin/bash [[ $(cat /proc/sys/fs/aio-max-nr) = 262144 ]] kpatch-0.5.0/test/integration/centos-7/macro-hooks.patch000066400000000000000000000013111321664017000231660ustar00rootroot00000000000000diff -Nupr src.orig/fs/aio.c src/fs/aio.c --- src.orig/fs/aio.c 2017-09-22 15:27:21.702056192 -0400 +++ src/fs/aio.c 2017-09-22 15:27:36.703118311 -0400 @@ -1683,6 +1683,20 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t return ret; } +static int aio_max_nr_orig; +void kpatch_load_aio_max_nr(void) +{ + aio_max_nr_orig = aio_max_nr; + aio_max_nr = 0x40000; +} +void kpatch_unload_aio_max_nr(void) +{ + aio_max_nr = aio_max_nr_orig; +} +#include "kpatch-macros.h" +KPATCH_LOAD_HOOK(kpatch_load_aio_max_nr); +KPATCH_UNLOAD_HOOK(kpatch_unload_aio_max_nr); + /* io_getevents: * Attempts to read at least min_nr events and up to nr events from * the completion queue for the aio_context specified by ctx_id. If kpatch-0.5.0/test/integration/centos-7/macro-printk.patch000066400000000000000000000111141321664017000233540ustar00rootroot00000000000000diff -Nupr src.orig/net/ipv4/fib_frontend.c src/net/ipv4/fib_frontend.c --- src.orig/net/ipv4/fib_frontend.c 2017-09-22 16:52:10.646110299 -0400 +++ src/net/ipv4/fib_frontend.c 2017-09-22 16:55:14.395870305 -0400 @@ -633,6 +633,7 @@ errout: return err; } +#include "kpatch-macros.h" static int inet_rtm_newroute(struct sk_buff *skb, struct nlmsghdr *nlh) { struct net *net = sock_net(skb->sk); @@ -651,6 +652,7 @@ static int inet_rtm_newroute(struct sk_b } err = fib_table_insert(net, tb, &cfg); + KPATCH_PRINTK("[inet_rtm_newroute]: err is %d\n", err); errout: return err; } diff -Nupr src.orig/net/ipv4/fib_semantics.c src/net/ipv4/fib_semantics.c --- src.orig/net/ipv4/fib_semantics.c 2017-09-22 16:52:10.645110295 -0400 +++ src/net/ipv4/fib_semantics.c 2017-09-22 16:54:05.175584004 -0400 @@ -925,6 +925,7 @@ fib_convert_metrics(struct fib_info *fi, return 0; } +#include "kpatch-macros.h" struct fib_info *fib_create_info(struct fib_config *cfg) { int err; @@ -949,6 +950,7 @@ struct fib_info *fib_create_info(struct #endif err = -ENOBUFS; + KPATCH_PRINTK("[fib_create_info]: create error err is %d\n",err); if (fib_info_cnt >= fib_info_hash_size) { unsigned int new_size = fib_info_hash_size << 1; struct hlist_head *new_info_hash; @@ -969,6 +971,7 @@ struct fib_info *fib_create_info(struct if (!fib_info_hash_size) goto failure; } + KPATCH_PRINTK("[fib_create_info]: 2 create error err is %d\n",err); fi = kzalloc(sizeof(*fi)+nhs*sizeof(struct fib_nh), GFP_KERNEL); if (fi == NULL) @@ -980,6 +983,7 @@ struct fib_info *fib_create_info(struct } else fi->fib_metrics = (u32 *) dst_default_metrics; fib_info_cnt++; + KPATCH_PRINTK("[fib_create_info]: 3 create error err is %d\n",err); fi->fib_net = net; fi->fib_protocol = cfg->fc_protocol; @@ -996,8 +1000,10 @@ struct fib_info *fib_create_info(struct if (!nexthop_nh->nh_pcpu_rth_output) goto failure; } endfor_nexthops(fi) + KPATCH_PRINTK("[fib_create_info]: 4 create error err is %d\n",err); err = fib_convert_metrics(fi, cfg); + KPATCH_PRINTK("[fib_create_info]: 5 create error err is %d\n",err); if (err) goto failure; @@ -1048,6 +1054,7 @@ struct fib_info *fib_create_info(struct nh->nh_weight = 1; #endif } + 
KPATCH_PRINTK("[fib_create_info]: 6 create error err is %d\n",err); if (fib_props[cfg->fc_type].error) { if (cfg->fc_gw || cfg->fc_oif || cfg->fc_mp) @@ -1065,6 +1072,7 @@ struct fib_info *fib_create_info(struct goto err_inval; } } + KPATCH_PRINTK("[fib_create_info]: 7 create error err is %d\n",err); if (cfg->fc_scope > RT_SCOPE_HOST) goto err_inval; @@ -1087,6 +1095,7 @@ struct fib_info *fib_create_info(struct goto failure; } endfor_nexthops(fi) } + KPATCH_PRINTK("[fib_create_info]: 8 create error err is %d\n",err); if (fi->fib_prefsrc) { if (cfg->fc_type != RTN_LOCAL || !cfg->fc_dst || @@ -1099,6 +1108,7 @@ struct fib_info *fib_create_info(struct fib_info_update_nh_saddr(net, nexthop_nh); fib_add_weight(fi, nexthop_nh); } endfor_nexthops(fi) + KPATCH_PRINTK("[fib_create_info]: 9 create error err is %d\n",err); fib_rebalance(fi); @@ -1110,6 +1120,7 @@ link_it: ofi->fib_treeref++; return ofi; } + KPATCH_PRINTK("[fib_create_info]: 10 create error err is %d\n",err); fi->fib_treeref++; atomic_inc(&fi->fib_clntref); @@ -1133,6 +1144,7 @@ link_it: hlist_add_head(&nexthop_nh->nh_hash, head); } endfor_nexthops(fi) spin_unlock_bh(&fib_info_lock); + KPATCH_PRINTK("[fib_create_info]: 11 create error err is %d\n",err); return fi; err_inval: @@ -1143,6 +1155,7 @@ failure: fi->fib_dead = 1; free_fib_info(fi); } + KPATCH_PRINTK("[fib_create_info]: 12 create error err is %d\n",err); return ERR_PTR(err); } diff -Nupr src.orig/net/ipv4/fib_trie.c src/net/ipv4/fib_trie.c --- src.orig/net/ipv4/fib_trie.c 2017-09-22 16:52:10.645110295 -0400 +++ src/net/ipv4/fib_trie.c 2017-09-22 16:55:39.940975963 -0400 @@ -1191,6 +1191,7 @@ static int fib_insert_alias(struct trie } /* Caller must hold RTNL. */ +#include "kpatch-macros.h" int fib_table_insert(struct net *net, struct fib_table *tb, struct fib_config *cfg) { @@ -1216,11 +1217,14 @@ int fib_table_insert(struct net *net, st if ((plen < KEYLENGTH) && (key << plen)) return -EINVAL; + KPATCH_PRINTK("[fib_table_insert]: start\n"); fi = fib_create_info(cfg); if (IS_ERR(fi)) { err = PTR_ERR(fi); + KPATCH_PRINTK("[fib_table_insert]: create error err is %d\n",err); goto err; } + KPATCH_PRINTK("[fib_table_insert]: cross\n"); l = fib_find_node(t, &tp, key); fa = l ? 
fib_find_alias(&l->leaf, slen, tos, fi->fib_priority) : NULL; kpatch-0.5.0/test/integration/centos-7/meminfo-cmdline-rebuild-SLOW-LOADED.test000077500000000000000000000001141321664017000270260ustar00rootroot00000000000000#!/bin/bash grep VMALLOCCHUNK /proc/meminfo && grep kpatch=1 /proc/cmdline kpatch-0.5.0/test/integration/centos-7/meminfo-cmdline-rebuild-SLOW.patch000066400000000000000000000022501321664017000262200ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2017-09-22 15:27:21.698056175 -0400 +++ src/fs/proc/cmdline.c 2017-09-22 15:27:37.842123028 -0400 @@ -5,7 +5,7 @@ static int cmdline_proc_show(struct seq_file *m, void *v) { - seq_printf(m, "%s\n", saved_command_line); + seq_printf(m, "%s kpatch=1\n", saved_command_line); return 0; } diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-09-22 15:27:21.699056179 -0400 +++ src/fs/proc/meminfo.c 2017-09-22 15:27:37.843123032 -0400 @@ -99,7 +99,7 @@ static int meminfo_proc_show(struct seq_ "Committed_AS: %8lu kB\n" "VmallocTotal: %8lu kB\n" "VmallocUsed: %8lu kB\n" - "VmallocChunk: %8lu kB\n" + "VMALLOCCHUNK: %8lu kB\n" #ifdef CONFIG_MEMORY_FAILURE "HardwareCorrupted: %5lu kB\n" #endif diff -Nupr src.orig/include/linux/kernel.h src/include/linux/kernel.h --- src.orig/include/linux/kernel.h 2017-09-22 15:27:20.379050713 -0400 +++ src/include/linux/kernel.h 2017-09-22 15:27:37.843123032 -0400 @@ -2,6 +2,7 @@ #define _LINUX_KERNEL_H + #include #include #include kpatch-0.5.0/test/integration/centos-7/meminfo-init-FAIL.patch000066400000000000000000000006011321664017000240510ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-09-22 15:27:21.699056179 -0400 +++ src/fs/proc/meminfo.c 2017-09-22 15:27:40.130132502 -0400 @@ -191,6 +191,7 @@ static const struct file_operations memi static int __init proc_meminfo_init(void) { + printk("a\n"); proc_create("meminfo", 0, NULL, &meminfo_proc_fops); return 0; } kpatch-0.5.0/test/integration/centos-7/meminfo-init2-FAIL.patch000066400000000000000000000010421321664017000241330ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-09-22 15:27:21.699056179 -0400 +++ src/fs/proc/meminfo.c 2017-09-22 15:27:38.972127707 -0400 @@ -30,6 +30,7 @@ static int meminfo_proc_show(struct seq_ unsigned long pages[NR_LRU_LISTS]; int lru; + printk("a\n"); /* * display in kilobytes. 
*/ @@ -191,6 +192,7 @@ static const struct file_operations memi static int __init proc_meminfo_init(void) { + printk("a\n"); proc_create("meminfo", 0, NULL, &meminfo_proc_fops); return 0; } kpatch-0.5.0/test/integration/centos-7/meminfo-string-LOADED.test000077500000000000000000000000551321664017000245170ustar00rootroot00000000000000#!/bin/bash grep VMALLOCCHUNK /proc/meminfo kpatch-0.5.0/test/integration/centos-7/meminfo-string.patch000066400000000000000000000007331321664017000237110ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-09-22 15:27:21.699056179 -0400 +++ src/fs/proc/meminfo.c 2017-09-22 15:27:41.274137239 -0400 @@ -99,7 +99,7 @@ static int meminfo_proc_show(struct seq_ "Committed_AS: %8lu kB\n" "VmallocTotal: %8lu kB\n" "VmallocUsed: %8lu kB\n" - "VmallocChunk: %8lu kB\n" + "VMALLOCCHUNK: %8lu kB\n" #ifdef CONFIG_MEMORY_FAILURE "HardwareCorrupted: %5lu kB\n" #endif kpatch-0.5.0/test/integration/centos-7/module-call-external.patch000066400000000000000000000017501321664017000247710ustar00rootroot00000000000000diff -Nupr src.orig/fs/nfsd/export.c src/fs/nfsd/export.c --- src.orig/fs/nfsd/export.c 2017-09-22 15:27:21.705056204 -0400 +++ src/fs/nfsd/export.c 2017-09-22 15:27:42.411141948 -0400 @@ -1184,6 +1184,8 @@ static void exp_flags(struct seq_file *m } } +extern char *kpatch_string(void); + static int e_show(struct seq_file *m, void *p) { struct cache_head *cp = p; @@ -1193,6 +1195,7 @@ static int e_show(struct seq_file *m, vo if (p == SEQ_START_TOKEN) { seq_puts(m, "# Version 1.1\n"); seq_puts(m, "# Path Client(Flags) # IPs\n"); + seq_puts(m, kpatch_string()); return 0; } diff -Nupr src.orig/net/netlink/af_netlink.c src/net/netlink/af_netlink.c --- src.orig/net/netlink/af_netlink.c 2017-09-22 15:27:21.754056407 -0400 +++ src/net/netlink/af_netlink.c 2017-09-22 15:27:42.412141952 -0400 @@ -3260,4 +3260,9 @@ panic: panic("netlink_init: Cannot allocate nl_table\n"); } +char *kpatch_string(void) +{ + return "# kpatch\n"; +} + core_initcall(netlink_proto_init); kpatch-0.5.0/test/integration/centos-7/module-kvm-fixup.patch000066400000000000000000000006711321664017000241650ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/vmx.c src/arch/x86/kvm/vmx.c --- src.orig/arch/x86/kvm/vmx.c 2017-09-22 15:27:20.853052676 -0400 +++ src/arch/x86/kvm/vmx.c 2017-09-22 15:27:43.583146801 -0400 @@ -10597,6 +10597,8 @@ static int vmx_check_intercept(struct kv struct x86_instruction_info *info, enum x86_intercept_stage stage) { + if (!jiffies) + printk("kpatch vmx_check_intercept\n"); return X86EMUL_CONTINUE; } kpatch-0.5.0/test/integration/centos-7/module-shadow.patch000066400000000000000000000016361321664017000235260ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/vmx.c src/arch/x86/kvm/vmx.c --- src.orig/arch/x86/kvm/vmx.c 2017-09-22 15:27:20.853052676 -0400 +++ src/arch/x86/kvm/vmx.c 2017-09-22 15:27:44.742151601 -0400 @@ -10581,10 +10581,20 @@ static void vmx_leave_nested(struct kvm_ * It should only be called before L2 actually succeeded to run, and when * vmcs01 is current (it doesn't leave_guest_mode() or switch vmcss). 
*/ +#include "kpatch.h" static void nested_vmx_entry_failure(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12, u32 reason, unsigned long qualification) { + int *kpatch; + + kpatch = kpatch_shadow_alloc(vcpu, "kpatch", sizeof(*kpatch), + GFP_KERNEL); + if (kpatch) { + kpatch_shadow_get(vcpu, "kpatch"); + kpatch_shadow_free(vcpu, "kpatch"); + } + load_vmcs12_host_state(vcpu, vmcs12); vmcs12->vm_exit_reason = reason | VMX_EXIT_REASONS_FAILED_VMENTRY; vmcs12->exit_qualification = qualification; kpatch-0.5.0/test/integration/centos-7/multiple.test000077500000000000000000000017051321664017000224710ustar00rootroot00000000000000#!/bin/bash SCRIPTDIR="$(readlink -f $(dirname $(type -p $0)))" ROOTDIR="$(readlink -f $SCRIPTDIR/../../..)" KPATCH="sudo $ROOTDIR/kpatch/kpatch" set -o errexit die() { echo "ERROR: $@" >&2 exit 1 } ko_to_test() { tmp=${1%.ko}-LOADED.test echo ${tmp#kpatch-} } # make sure any modules added here are disjoint declare -a modules=(kpatch-cmdline-string.ko kpatch-meminfo-string.ko) for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) $SCRIPTDIR/$testprog && die "$SCRIPTDIR/$testprog succeeded before loading any modules" done for mod in "${modules[@]}"; do $KPATCH load $mod done for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) $SCRIPTDIR/$testprog || die "$SCRIPTDIR/$testprog failed after loading modules" done for mod in "${modules[@]}"; do $KPATCH unload $mod done for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) $SCRIPTDIR/$testprog && die "$SCRIPTDIR/$testprog succeeded after unloading modules" done exit 0 kpatch-0.5.0/test/integration/centos-7/new-function.patch000066400000000000000000000014651321664017000233720ustar00rootroot00000000000000diff -Nupr src.orig/drivers/tty/n_tty.c src/drivers/tty/n_tty.c --- src.orig/drivers/tty/n_tty.c 2017-09-22 15:27:21.084053633 -0400 +++ src/drivers/tty/n_tty.c 2017-09-22 15:27:45.888156346 -0400 @@ -2016,7 +2016,7 @@ do_it_again: * lock themselves) */ -static ssize_t n_tty_write(struct tty_struct *tty, struct file *file, +static ssize_t noinline kpatch_n_tty_write(struct tty_struct *tty, struct file *file, const unsigned char *buf, size_t nr) { const unsigned char *b = buf; @@ -2098,6 +2098,12 @@ break_out: return (b - buf) ? b - buf : retval; } +static ssize_t n_tty_write(struct tty_struct *tty, struct file *file, + const unsigned char *buf, size_t nr) +{ + return kpatch_n_tty_write(tty, file, buf, nr); +} + /** * n_tty_poll - poll method for N_TTY * @tty: terminal device kpatch-0.5.0/test/integration/centos-7/new-globals.patch000066400000000000000000000017551321664017000231720ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2017-09-22 15:27:21.698056175 -0400 +++ src/fs/proc/cmdline.c 2017-09-22 15:27:47.028161067 -0400 @@ -27,3 +27,10 @@ static int __init proc_cmdline_init(void return 0; } module_init(proc_cmdline_init); + +#include +void kpatch_print_message(void) +{ + if (!jiffies) + printk("hello there!\n"); +} diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-09-22 15:27:21.699056179 -0400 +++ src/fs/proc/meminfo.c 2017-09-22 15:27:47.029161071 -0400 @@ -16,6 +16,8 @@ #include #include "internal.h" +void kpatch_print_message(void); + void __attribute__((weak)) arch_report_meminfo(struct seq_file *m) { } @@ -53,6 +55,7 @@ static int meminfo_proc_show(struct seq_ /* * Tagged format, for easy grepping and expansion. 
*/ + kpatch_print_message(); seq_printf(m, "MemTotal: %8lu kB\n" "MemFree: %8lu kB\n" kpatch-0.5.0/test/integration/centos-7/parainstructions-section.patch000066400000000000000000000006551321664017000260300ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/generic.c src/fs/proc/generic.c --- src.orig/fs/proc/generic.c 2017-09-22 15:27:21.698056175 -0400 +++ src/fs/proc/generic.c 2017-09-22 15:27:48.190165879 -0400 @@ -194,6 +194,7 @@ int proc_alloc_inum(unsigned int *inum) unsigned int i; int error; + printk("kpatch-test: testing change to .parainstructions section\n"); retry: if (!ida_pre_get(&proc_inum_ida, GFP_KERNEL)) return -ENOMEM; kpatch-0.5.0/test/integration/centos-7/remote-setup000077500000000000000000000033431321664017000223110ustar00rootroot00000000000000#!/bin/bash -x # install rpms on a Fedora 22 system to prepare it for kpatch integration tests set -o errexit [[ $UID != 0 ]] && sudo=sudo warn() { echo "ERROR: $1" >&2 } die() { warn "$@" exit 1 } install_rpms() { # crude workaround for a weird dnf bug where it fails to download $sudo dnf install -y $* || $sudo dnf install -y $* } install_rpms gcc elfutils elfutils-devel rpmdevtools pesign openssl numactl-devel wget patchutils $sudo dnf builddep -y kernel || $sudo dnf builddep -y kernel # install kernel debuginfo and devel RPMs for target kernel kverrel=$(uname -r) kverrel=${kverrel%.x86_64} kver=${kverrel%%-*} krel=${kverrel#*-} install_rpms https://kojipkgs.fedoraproject.org/packages/kernel/$kver/$krel/x86_64/kernel-debuginfo-$kver-$krel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/kernel/$kver/$krel/x86_64/kernel-debuginfo-common-x86_64-$kver-$krel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/kernel/$kver/$krel/x86_64/kernel-devel-$kver-$krel.x86_64.rpm # install version of gcc which was used to build the target kernel gccver=$(gcc --version |head -n1 |cut -d' ' -f3-) kgccver=$(readelf -p .comment /usr/lib/debug/lib/modules/$(uname -r)/vmlinux |grep GCC: | tr -s ' ' | cut -d ' ' -f6-) if [[ $gccver != $kgccver ]]; then gver=$(echo $kgccver | awk '{print $1}') grel=$(echo $kgccver | sed 's/.*-\(.*\))/\1/') grel=$grel.$(rpm -q gcc |sed 's/.*\.\(.*\)\.x86_64/\1/') install_rpms https://kojipkgs.fedoraproject.org/packages/gcc/$gver/$grel/x86_64/cpp-$gver-$grel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/gcc/$gver/$grel/x86_64/gcc-$gver-$grel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/gcc/$gver/$grel/x86_64/libgomp-$gver-$grel.x86_64.rpm fi install_rpms ccache ccache -M 5G kpatch-0.5.0/test/integration/centos-7/replace-section-references.patch000066400000000000000000000007461321664017000261530ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/x86.c src/arch/x86/kvm/x86.c --- src.orig/arch/x86/kvm/x86.c 2017-09-22 15:27:20.852052672 -0400 +++ src/arch/x86/kvm/x86.c 2017-09-22 15:27:49.362170732 -0400 @@ -248,6 +248,8 @@ static void shared_msr_update(unsigned s void kvm_define_shared_msr(unsigned slot, u32 msr) { + if (!jiffies) + printk("kpatch kvm define shared msr\n"); BUG_ON(slot >= KVM_NR_SHARED_MSRS); shared_msrs_global.msrs[slot] = msr; if (slot >= shared_msrs_global.nr) kpatch-0.5.0/test/integration/centos-7/shadow-newpid-LOADED.test000077500000000000000000000000551321664017000243320ustar00rootroot00000000000000#!/bin/bash grep -q newpid: /proc/$$/status kpatch-0.5.0/test/integration/centos-7/shadow-newpid.patch000066400000000000000000000041361321664017000235250ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/array.c src/fs/proc/array.c --- 
src.orig/fs/proc/array.c 2017-09-22 16:52:10.597110096 -0400 +++ src/fs/proc/array.c 2017-09-22 16:59:40.799972178 -0400 @@ -359,13 +359,20 @@ static inline void task_seccomp(struct s #endif } +#include "kpatch.h" static inline void task_context_switch_counts(struct seq_file *m, struct task_struct *p) { + int *newpid; + seq_printf(m, "voluntary_ctxt_switches:\t%lu\n" "nonvoluntary_ctxt_switches:\t%lu\n", p->nvcsw, p->nivcsw); + + newpid = kpatch_shadow_get(p, "newpid"); + if (newpid) + seq_printf(m, "newpid:\t%d\n", *newpid); } static void task_cpus_allowed(struct seq_file *m, struct task_struct *task) diff -Nupr src.orig/kernel/exit.c src/kernel/exit.c --- src.orig/kernel/exit.c 2017-09-22 16:52:10.506109720 -0400 +++ src/kernel/exit.c 2017-09-22 16:59:40.799972178 -0400 @@ -715,6 +715,7 @@ static void check_stack_usage(void) static inline void check_stack_usage(void) {} #endif +#include "kpatch.h" void do_exit(long code) { struct task_struct *tsk = current; @@ -812,6 +813,8 @@ void do_exit(long code) check_stack_usage(); exit_thread(); + kpatch_shadow_free(tsk, "newpid"); + /* * Flush inherited counters to the parent - before the parent * gets woken up by child-exit notifications. diff -Nupr src.orig/kernel/fork.c src/kernel/fork.c --- src.orig/kernel/fork.c 2017-09-22 16:52:10.504109711 -0400 +++ src/kernel/fork.c 2017-09-22 17:00:44.938237460 -0400 @@ -1700,6 +1700,7 @@ struct task_struct *fork_idle(int cpu) * It copies the process, and if successful kick-starts * it and waits for it to finish using the VM if required. */ +#include "kpatch.h" long do_fork(unsigned long clone_flags, unsigned long stack_start, unsigned long stack_size, @@ -1737,6 +1738,13 @@ long do_fork(unsigned long clone_flags, if (!IS_ERR(p)) { struct completion vfork; struct pid *pid; + int *newpid; + static int ctr = 0; + + newpid = kpatch_shadow_alloc(p, "newpid", sizeof(*newpid), + GFP_KERNEL); + if (newpid) + *newpid = ctr++; trace_sched_process_fork(current, p); kpatch-0.5.0/test/integration/centos-7/smp-locks-section.patch000066400000000000000000000011061321664017000243200ustar00rootroot00000000000000diff -Nupr src.orig/drivers/tty/tty_buffer.c src/drivers/tty/tty_buffer.c --- src.orig/drivers/tty/tty_buffer.c 2017-09-22 15:27:21.077053604 -0400 +++ src/drivers/tty/tty_buffer.c 2017-09-22 15:27:50.542175618 -0400 @@ -217,6 +217,10 @@ int tty_buffer_request_room(struct tty_p /* OPTIMISATION: We could keep a per tty "zero" sized buffer to remove this conditional if its worth it. 
This would be invisible to the callers */ + + if (!size) + printk("kpatch-test: testing .smp_locks section changes\n"); + b = buf->tail; if (b != NULL) left = b->size - b->used; kpatch-0.5.0/test/integration/centos-7/special-static-2.patch000066400000000000000000000012251321664017000240140ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/x86.c src/arch/x86/kvm/x86.c --- src.orig/arch/x86/kvm/x86.c 2017-09-22 15:27:20.852052672 -0400 +++ src/arch/x86/kvm/x86.c 2017-09-22 15:27:51.744180596 -0400 @@ -2093,12 +2093,20 @@ static void record_steal_time(struct kvm &vcpu->arch.st.steal, sizeof(struct kvm_steal_time)); } +void kpatch_kvm_x86_foo(void) +{ + if (!jiffies) + printk("kpatch kvm x86 foo\n"); +} + int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) { bool pr = false; u32 msr = msr_info->index; u64 data = msr_info->data; + kpatch_kvm_x86_foo(); + switch (msr) { case MSR_AMD64_NB_CFG: case MSR_IA32_UCODE_REV: kpatch-0.5.0/test/integration/centos-7/special-static.patch000066400000000000000000000010351321664017000236540ustar00rootroot00000000000000diff -Nupr src.orig/kernel/fork.c src/kernel/fork.c --- src.orig/kernel/fork.c 2017-09-22 15:27:21.600055769 -0400 +++ src/kernel/fork.c 2017-09-22 15:27:53.052186012 -0400 @@ -1129,10 +1129,18 @@ static void posix_cpu_timers_init_group( INIT_LIST_HEAD(&sig->cpu_timers[2]); } +void kpatch_foo(void) +{ + if (!jiffies) + printk("kpatch copy signal\n"); +} + static int copy_signal(unsigned long clone_flags, struct task_struct *tsk) { struct signal_struct *sig; + kpatch_foo(); + if (clone_flags & CLONE_THREAD) return 0; kpatch-0.5.0/test/integration/centos-7/tracepoints-section.patch000066400000000000000000000007141321664017000247470ustar00rootroot00000000000000diff -Nupr src.orig/kernel/timer.c src/kernel/timer.c --- src.orig/kernel/timer.c 2017-09-22 15:27:21.600055769 -0400 +++ src/kernel/timer.c 2017-09-22 15:27:54.288191131 -0400 @@ -1390,6 +1390,9 @@ static void run_timer_softirq(struct sof { struct tvec_base *base = __this_cpu_read(tvec_bases); + if (!base) + printk("kpatch-test: testing __tracepoints section changes\n"); + if (time_after_eq(jiffies, base->timer_jiffies)) __run_timers(base); } kpatch-0.5.0/test/integration/centos-7/warn-detect-FAIL.patch000066400000000000000000000004151321664017000236760ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/x86.c src/arch/x86/kvm/x86.c --- src.orig/arch/x86/kvm/x86.c 2017-09-22 15:27:20.852052672 -0400 +++ src/arch/x86/kvm/x86.c 2017-09-22 15:27:55.489196104 -0400 @@ -1,3 +1,4 @@ + /* * Kernel-based Virtual Machine driver for Linux * kpatch-0.5.0/test/integration/fedora-25/000077500000000000000000000000001321664017000177545ustar00rootroot00000000000000kpatch-0.5.0/test/integration/fedora-25/README000066400000000000000000000000261321664017000206320ustar00rootroot000000000000004.9.7-201.fc25.x86_64 kpatch-0.5.0/test/integration/fedora-25/bug-table-section.patch000066400000000000000000000007711321664017000243060ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/proc_sysctl.c src/fs/proc/proc_sysctl.c --- src.orig/fs/proc/proc_sysctl.c 2016-11-30 19:39:49.316737234 +0000 +++ src/fs/proc/proc_sysctl.c 2016-11-30 19:39:49.441737234 +0000 @@ -301,6 +301,8 @@ void sysctl_head_put(struct ctl_table_he static struct ctl_table_header *sysctl_head_grab(struct ctl_table_header *head) { + if (jiffies == 0) + printk("kpatch-test: testing __bug_table section changes\n"); BUG_ON(!head); spin_lock(&sysctl_lock); if (!use_table(head)) 
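The integration patches in these per-distro directories follow a simple convention, visible in multiple.test and the *-LOADED.test scripts: building <name>.patch is expected to yield a module kpatch-<name>.ko, and an optional <name>-LOADED.test script verifies the patch's effect while it is loaded. The sketch below shows how a single patch might be exercised by hand. It is illustrative only: it assumes kpatch-build and the kpatch script are installed, that kernel debuginfo for the running kernel is available, and that the kpatch-<name>.ko naming convention holds.

# Hand-run a single integration patch (illustrative sketch, not part of the suite).
set -o errexit
p=cmdline-string                                  # any patch name from this directory
kpatch-build "$p.patch"                           # build a patch module against the running kernel
sudo kpatch load "kpatch-$p.ko"                   # apply it to the running kernel
[[ -x "./$p-LOADED.test" ]] && "./$p-LOADED.test" # run the per-patch check, if one exists
sudo kpatch unload "kpatch-$p.ko"                 # remove it again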
kpatch-0.5.0/test/integration/fedora-25/cmdline-string-LOADED.test000077500000000000000000000000511321664017000245210ustar00rootroot00000000000000#!/bin/bash grep kpatch=1 /proc/cmdline kpatch-0.5.0/test/integration/fedora-25/cmdline-string.patch000066400000000000000000000006011321664017000237110ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2016-11-30 19:39:49.317737234 +0000 +++ src/fs/proc/cmdline.c 2016-11-30 19:39:52.696737234 +0000 @@ -5,7 +5,7 @@ static int cmdline_proc_show(struct seq_file *m, void *v) { - seq_printf(m, "%s\n", saved_command_line); + seq_printf(m, "%s kpatch=1\n", saved_command_line); return 0; } kpatch-0.5.0/test/integration/fedora-25/convert-global-local.patch000066400000000000000000000010161321664017000250010ustar00rootroot00000000000000This is a test for #658: a kernel panic seen when patching an exported function (e.g., kmalloc) which is used by patch_init(). --- diff -Nupr src.orig/mm/slub.c src/mm/slub.c --- src.orig/mm/slub.c 2016-12-11 14:17:54.000000000 -0500 +++ src/mm/slub.c 2017-02-08 21:02:17.946870598 -0500 @@ -3719,6 +3719,9 @@ void *__kmalloc(size_t size, gfp_t flags struct kmem_cache *s; void *ret; + if (!jiffies) + printk("kpatch kmalloc\n"); + if (unlikely(size > KMALLOC_MAX_CACHE_SIZE)) return kmalloc_large(size, flags); kpatch-0.5.0/test/integration/fedora-25/data-new-LOADED.test000077500000000000000000000000541321664017000233050ustar00rootroot00000000000000#!/bin/bash grep "kpatch: 5" /proc/meminfo kpatch-0.5.0/test/integration/fedora-25/data-new.patch000066400000000000000000000011401321664017000224710ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-02-08 21:06:25.943876606 -0500 +++ src/fs/proc/meminfo.c 2017-02-08 21:08:07.154879058 -0500 @@ -42,6 +42,8 @@ static void show_val_kb(struct seq_file seq_write(m, " kB\n", 4); } +static int foo = 5; + static int meminfo_proc_show(struct seq_file *m, void *v) { struct sysinfo i; @@ -153,6 +155,7 @@ static int meminfo_proc_show(struct seq_ show_val_kb(m, "CmaFree: ", global_page_state(NR_FREE_CMA_PAGES)); #endif + seq_printf(m, "kpatch: %d\n", foo); hugetlb_report_meminfo(m); kpatch-0.5.0/test/integration/fedora-25/data-read-mostly.patch000066400000000000000000000004471321664017000241510ustar00rootroot00000000000000diff -Nupr src.orig/net/core/dev.c src/net/core/dev.c --- src.orig/net/core/dev.c 2016-11-30 19:39:45.232737234 +0000 +++ src/net/core/dev.c 2016-11-30 19:40:02.077737234 +0000 @@ -4179,6 +4179,7 @@ ncls: case RX_HANDLER_PASS: break; default: + printk("BUG!\n"); BUG(); } } kpatch-0.5.0/test/integration/fedora-25/fixup-section.patch000066400000000000000000000007331321664017000235750ustar00rootroot00000000000000diff -Nupr src.orig/fs/readdir.c src/fs/readdir.c --- src.orig/fs/readdir.c 2016-11-30 19:39:49.237737234 +0000 +++ src/fs/readdir.c 2016-11-30 19:40:05.186737234 +0000 @@ -188,6 +188,8 @@ static int filldir(struct dir_context *c goto efault; } dirent = buf->current_dir; + if (dirent->d_ino == 12345678) + printk("kpatch-test: testing .fixup section changes\n"); if (__put_user(d_ino, &dirent->d_ino)) goto efault; if (__put_user(reclen, &dirent->d_reclen)) kpatch-0.5.0/test/integration/fedora-25/gcc-constprop.patch000066400000000000000000000010151321664017000235530ustar00rootroot00000000000000ensure timekeeping_forward_now.constprop.8 and timekeeping_forward_now.constprop.9 are correlated. 
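(Illustrative aside: the .constprop suffixes named above are assigned by gcc and can be inspected directly with binutils. The exact numbers vary with the compiler version, and the object path below assumes the kpatch-build object cache under $HOME/.kpatch/obj used elsewhere in this test directory.)

# paths and suffix numbers are illustrative; adjust to the local build
readelf -sW "$HOME/.kpatch/obj/kernel/time/timekeeping.o" | grep -E 'constprop|isra'
nm "/usr/lib/debug/lib/modules/$(uname -r)/vmlinux" | grep timekeeping_forward_now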
diff -Nupr src.orig/kernel/time/timekeeping.c src/kernel/time/timekeeping.c --- src.orig/kernel/time/timekeeping.c 2016-11-30 19:39:45.151737234 +0000 +++ src/kernel/time/timekeeping.c 2016-11-30 19:40:08.035737234 +0000 @@ -1150,6 +1150,9 @@ void do_gettimeofday(struct timeval *tv) { struct timespec64 now; + if (!tv) + return; + getnstimeofday64(&now); tv->tv_sec = now.tv_sec; tv->tv_usec = now.tv_nsec/1000; kpatch-0.5.0/test/integration/fedora-25/gcc-isra.patch000066400000000000000000000006541321664017000224720ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/proc_sysctl.c src/fs/proc/proc_sysctl.c --- src.orig/fs/proc/proc_sysctl.c 2016-11-30 19:39:49.316737234 +0000 +++ src/fs/proc/proc_sysctl.c 2016-11-30 19:40:10.918737234 +0000 @@ -46,6 +46,7 @@ void proc_sys_poll_notify(struct ctl_tab if (!poll) return; + printk("kpatch-test: testing gcc .isra function name mangling\n"); atomic_inc(&poll->event); wake_up_interruptible(&poll->wait); } kpatch-0.5.0/test/integration/fedora-25/gcc-mangled-3.patch000066400000000000000000000010021321664017000232670ustar00rootroot00000000000000ensure that __cmpxchg_double_slab.isra.45 and __cmpxchg_double_slab.isra.45.part.46 aren't correlated. diff -Nupr src.orig/mm/slub.c src/mm/slub.c --- src.orig/mm/slub.c 2016-11-30 19:39:45.200737234 +0000 +++ src/mm/slub.c 2016-11-30 19:40:13.997737234 +0000 @@ -5758,6 +5758,9 @@ void get_slabinfo(struct kmem_cache *s, int node; struct kmem_cache_node *n; + if (!jiffies) + printk("slabinfo\n"); + for_each_kmem_cache_node(s, node, n) { nr_slabs += node_nr_slabs(n); nr_objs += node_nr_objs(n); kpatch-0.5.0/test/integration/fedora-25/gcc-static-local-var-2.patch000066400000000000000000000014021321664017000250300ustar00rootroot00000000000000diff -Nupr src.orig/mm/mmap.c src/mm/mmap.c --- src.orig/mm/mmap.c 2017-02-08 20:48:33.821850633 -0500 +++ src/mm/mmap.c 2017-02-08 20:48:56.682851187 -0500 @@ -1582,6 +1582,7 @@ static inline int accountable_mapping(st return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE; } +#include "kpatch-macros.h" unsigned long mmap_region(struct file *file, unsigned long addr, unsigned long len, vm_flags_t vm_flags, unsigned long pgoff) { @@ -1591,6 +1592,9 @@ unsigned long mmap_region(struct file *f struct rb_node **rb_link, *rb_parent; unsigned long charged = 0; + if (!jiffies) + printk("kpatch mmap foo\n"); + /* Check against address space limit. 
*/ if (!may_expand_vm(mm, vm_flags, len >> PAGE_SHIFT)) { unsigned long nr_pages; kpatch-0.5.0/test/integration/fedora-25/gcc-static-local-var-3.patch000066400000000000000000000006651321664017000250430ustar00rootroot00000000000000diff -Nupr src.orig/kernel/reboot.c src/kernel/reboot.c --- src.orig/kernel/reboot.c 2016-11-30 19:39:45.165737234 +0000 +++ src/kernel/reboot.c 2016-11-30 19:40:19.850737234 +0000 @@ -366,8 +366,15 @@ SYSCALL_DEFINE4(reboot, int, magic1, int return ret; } +void kpatch_bar(void) +{ + if (!jiffies) + printk("kpatch_foo\n"); +} + static void deferred_cad(struct work_struct *dummy) { + kpatch_bar(); kernel_restart(NULL); } kpatch-0.5.0/test/integration/fedora-25/gcc-static-local-var-4.patch000066400000000000000000000010561321664017000250370ustar00rootroot00000000000000diff -Nupr src.orig/fs/aio.c src/fs/aio.c --- src.orig/fs/aio.c.orig 2017-02-08 21:10:29.963882517 -0500 +++ src/fs/aio.c 2017-02-08 21:10:51.501883039 -0500 @@ -271,10 +271,17 @@ static int __init aio_setup(void) } __initcall(aio_setup); +void kpatch_aio_foo(void) +{ + if (!jiffies) + printk("kpatch aio foo\n"); +} + static void put_aio_ring_file(struct kioctx *ctx) { struct file *aio_ring_file = ctx->aio_ring_file; struct address_space *i_mapping; + kpatch_aio_foo(); if (aio_ring_file) { truncate_setsize(aio_ring_file->f_inode, 0); kpatch-0.5.0/test/integration/fedora-25/gcc-static-local-var-4.test000077500000000000000000000001521321664017000247160ustar00rootroot00000000000000#!/bin/bash if $(nm kpatch-gcc-static-local-var-4.ko | grep -q free_ioctx); then exit 1 else exit 0 fi kpatch-0.5.0/test/integration/fedora-25/gcc-static-local-var-5.patch000066400000000000000000000021571321664017000250430ustar00rootroot00000000000000diff -Nupr src.orig/kernel/audit.c src/kernel/audit.c --- src.orig/kernel/audit.c 2016-11-30 19:39:45.165737234 +0000 +++ src/kernel/audit.c 2016-11-30 19:40:25.802737234 +0000 @@ -211,6 +211,12 @@ void audit_panic(const char *message) } } +void kpatch_audit_foo(void) +{ + if (!jiffies) + printk("kpatch audit foo\n"); +} + static inline int audit_rate_check(void) { static unsigned long last_check = 0; @@ -221,6 +227,7 @@ static inline int audit_rate_check(void) unsigned long elapsed; int retval = 0; + kpatch_audit_foo(); if (!audit_rate_limit) return 1; spin_lock_irqsave(&lock, flags); @@ -240,6 +247,11 @@ static inline int audit_rate_check(void) return retval; } +noinline void kpatch_audit_check(void) +{ + audit_rate_check(); +} + /** * audit_log_lost - conditionally log lost audit message event * @message: the message stating reason for lost audit message @@ -286,6 +298,8 @@ static int audit_log_config_change(char struct audit_buffer *ab; int rc = 0; + kpatch_audit_check(); + ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_CONFIG_CHANGE); if (unlikely(!ab)) return rc; kpatch-0.5.0/test/integration/fedora-25/gcc-static-local-var.patch000066400000000000000000000012121321664017000246700ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kernel/ldt.c src/arch/x86/kernel/ldt.c --- src.orig/arch/x86/kernel/ldt.c 2016-11-30 19:39:46.579737234 +0000 +++ src/arch/x86/kernel/ldt.c 2016-11-30 19:40:28.658737234 +0000 @@ -99,6 +99,12 @@ static void free_ldt_struct(struct ldt_s kfree(ldt); } +void hi_there(void) +{ + if (!jiffies) + printk("hi there\n"); +} + /* * we do not have to muck with descriptors here, that is * done in switch_mm() as needed. 
@@ -109,6 +115,8 @@ int init_new_context_ldt(struct task_str struct mm_struct *old_mm; int retval = 0; + hi_there(); + mutex_init(&mm->context.lock); old_mm = current->mm; if (!old_mm) { kpatch-0.5.0/test/integration/fedora-25/macro-hooks-LOADED.test000077500000000000000000000000731321664017000240300ustar00rootroot00000000000000#!/bin/bash [[ $(cat /proc/sys/fs/aio-max-nr) = 262144 ]] kpatch-0.5.0/test/integration/fedora-25/macro-hooks.patch000066400000000000000000000013111321664017000232130ustar00rootroot00000000000000diff -Nupr src.orig/fs/aio.c src/fs/aio.c --- src.orig/fs/aio.c 2016-11-30 19:39:49.237737234 +0000 +++ src/fs/aio.c 2016-11-30 19:40:31.570737234 +0000 @@ -1719,6 +1719,20 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t return ret; } +static int aio_max_nr_orig; +void kpatch_load_aio_max_nr(void) +{ + aio_max_nr_orig = aio_max_nr; + aio_max_nr = 0x40000; +} +void kpatch_unload_aio_max_nr(void) +{ + aio_max_nr = aio_max_nr_orig; +} +#include "kpatch-macros.h" +KPATCH_LOAD_HOOK(kpatch_load_aio_max_nr); +KPATCH_UNLOAD_HOOK(kpatch_unload_aio_max_nr); + /* io_getevents: * Attempts to read at least min_nr events and up to nr events from * the completion queue for the aio_context specified by ctx_id. If kpatch-0.5.0/test/integration/fedora-25/macro-printk.patch000066400000000000000000000111231321664017000234010ustar00rootroot00000000000000diff -Nupr src.orig/net/ipv4/fib_frontend.c src/net/ipv4/fib_frontend.c --- src.orig/net/ipv4/fib_frontend.c 2017-02-08 21:47:41.895936587 -0500 +++ src/net/ipv4/fib_frontend.c 2017-02-08 21:48:15.908937411 -0500 @@ -721,6 +721,7 @@ errout: return err; } +#include "kpatch-macros.h" static int inet_rtm_newroute(struct sk_buff *skb, struct nlmsghdr *nlh) { struct net *net = sock_net(skb->sk); @@ -739,6 +740,7 @@ static int inet_rtm_newroute(struct sk_b } err = fib_table_insert(net, tb, &cfg); + KPATCH_PRINTK("[inet_rtm_newroute]: err is %d\n", err); errout: return err; } diff -Nupr src.orig/net/ipv4/fib_semantics.c src/net/ipv4/fib_semantics.c --- src.orig/net/ipv4/fib_semantics.c 2017-02-08 21:49:22.766939031 -0500 +++ src/net/ipv4/fib_semantics.c 2017-02-08 21:53:08.628944503 -0500 @@ -991,6 +991,7 @@ fib_convert_metrics(struct fib_info *fi, return 0; } +#include "kpatch-macros.h" struct fib_info *fib_create_info(struct fib_config *cfg) { int err; @@ -1018,6 +1019,7 @@ struct fib_info *fib_create_info(struct #endif err = -ENOBUFS; + KPATCH_PRINTK("[fib_create_info]: create error err is %d\n",err); if (fib_info_cnt >= fib_info_hash_size) { unsigned int new_size = fib_info_hash_size << 1; struct hlist_head *new_info_hash; @@ -1038,6 +1040,7 @@ struct fib_info *fib_create_info(struct if (!fib_info_hash_size) goto failure; } + KPATCH_PRINTK("[fib_create_info]: 2 create error err is %d\n",err); fi = kzalloc(sizeof(*fi)+nhs*sizeof(struct fib_nh), GFP_KERNEL); if (!fi) @@ -1049,6 +1052,7 @@ struct fib_info *fib_create_info(struct goto failure; } else fi->fib_metrics = (u32 *) dst_default_metrics; + KPATCH_PRINTK("[fib_create_info]: 3 create error err is %d\n",err); fi->fib_net = net; fi->fib_protocol = cfg->fc_protocol; @@ -1066,6 +1070,7 @@ struct fib_info *fib_create_info(struct if (!nexthop_nh->nh_pcpu_rth_output) goto failure; } endfor_nexthops(fi) + KPATCH_PRINTK("[fib_create_info]: 4 create error err is %d\n",err); err = fib_convert_metrics(fi, cfg); if (err) @@ -1119,6 +1124,9 @@ struct fib_info *fib_create_info(struct #endif } + KPATCH_PRINTK("[fib_create_info]: 5 create error err is %d\n",err); + KPATCH_PRINTK("[fib_create_info]: 6 create error err 
is %d\n",err); + if (fib_props[cfg->fc_type].error) { if (cfg->fc_gw || cfg->fc_oif || cfg->fc_mp) goto err_inval; @@ -1135,6 +1143,7 @@ struct fib_info *fib_create_info(struct goto err_inval; } } + KPATCH_PRINTK("[fib_create_info]: 7 create error err is %d\n",err); if (cfg->fc_scope > RT_SCOPE_HOST) goto err_inval; @@ -1163,6 +1172,7 @@ struct fib_info *fib_create_info(struct if (linkdown == fi->fib_nhs) fi->fib_flags |= RTNH_F_LINKDOWN; } + KPATCH_PRINTK("[fib_create_info]: 8 create error err is %d\n",err); if (fi->fib_prefsrc && !fib_valid_prefsrc(cfg, fi->fib_prefsrc)) goto err_inval; @@ -1171,6 +1181,7 @@ struct fib_info *fib_create_info(struct fib_info_update_nh_saddr(net, nexthop_nh); fib_add_weight(fi, nexthop_nh); } endfor_nexthops(fi) + KPATCH_PRINTK("[fib_create_info]: 9 create error err is %d\n",err); fib_rebalance(fi); @@ -1182,6 +1193,7 @@ link_it: ofi->fib_treeref++; return ofi; } + KPATCH_PRINTK("[fib_create_info]: 10 create error err is %d\n",err); fi->fib_treeref++; atomic_inc(&fi->fib_clntref); @@ -1205,6 +1217,7 @@ link_it: hlist_add_head(&nexthop_nh->nh_hash, head); } endfor_nexthops(fi) spin_unlock_bh(&fib_info_lock); + KPATCH_PRINTK("[fib_create_info]: 11 create error err is %d\n",err); return fi; err_inval: @@ -1215,6 +1228,7 @@ failure: fi->fib_dead = 1; free_fib_info(fi); } + KPATCH_PRINTK("[fib_create_info]: 12 create error err is %d\n",err); return ERR_PTR(err); } diff -Nupr src.orig/net/ipv4/fib_trie.c src/net/ipv4/fib_trie.c --- src.orig/net/ipv4/fib_trie.c 2017-02-08 21:53:18.182944734 -0500 +++ src/net/ipv4/fib_trie.c 2017-02-09 16:43:09.835587031 -0500 @@ -1106,6 +1106,7 @@ static int fib_insert_alias(struct trie } /* Caller must hold RTNL. */ +#include "kpatch-macros.h" int fib_table_insert(struct net *net, struct fib_table *tb, struct fib_config *cfg) { @@ -1130,11 +1131,14 @@ int fib_table_insert(struct net *net, st if ((plen < KEYLENGTH) && (key << plen)) return -EINVAL; + KPATCH_PRINTK("[fib_table_insert]: start\n"); fi = fib_create_info(cfg); if (IS_ERR(fi)) { err = PTR_ERR(fi); + KPATCH_PRINTK("[fib_table_insert]: create error err is %d\n",err); goto err; } + KPATCH_PRINTK("[fib_table_insert]: cross\n"); l = fib_find_node(t, &tp, key); fa = l ? 
fib_find_alias(&l->leaf, slen, tos, fi->fib_priority, kpatch-0.5.0/test/integration/fedora-25/meminfo-cmdline-rebuild-SLOW-LOADED.test000077500000000000000000000001141321664017000270530ustar00rootroot00000000000000#!/bin/bash grep VMALLOCCHUNK /proc/meminfo && grep kpatch=1 /proc/cmdline kpatch-0.5.0/test/integration/fedora-25/meminfo-cmdline-rebuild-SLOW.patch000066400000000000000000000023601321664017000262470ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2017-02-08 21:31:22.297912856 -0500 +++ src/fs/proc/cmdline.c 2017-02-08 21:39:53.633925243 -0500 @@ -5,7 +5,7 @@ static int cmdline_proc_show(struct seq_file *m, void *v) { - seq_printf(m, "%s\n", saved_command_line); + seq_printf(m, "%s kpatch=1\n", saved_command_line); return 0; } diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-02-08 21:06:25.943876606 -0500 +++ src/fs/proc/meminfo.c 2017-02-08 21:40:25.550926017 -0500 @@ -132,7 +132,7 @@ static int meminfo_proc_show(struct seq_ seq_printf(m, "VmallocTotal: %8lu kB\n", (unsigned long)VMALLOC_TOTAL >> 10); show_val_kb(m, "VmallocUsed: ", 0ul); - show_val_kb(m, "VmallocChunk: ", 0ul); + show_val_kb(m, "VMALLOCCHUNK: ", 0ul); #ifdef CONFIG_MEMORY_FAILURE seq_printf(m, "HardwareCorrupted: %5lu kB\n", diff -Nupr src.orig/include/linux/kernel.h src/include/linux/kernel.h --- src.orig/include/linux/kernel.h 2017-02-08 21:42:09.228928528 -0500 +++ src/include/linux/kernel.h 2017-02-08 21:42:10.994928571 -0500 @@ -2,6 +2,7 @@ #define _LINUX_KERNEL_H + #include #include #include kpatch-0.5.0/test/integration/fedora-25/meminfo-init-FAIL.patch000066400000000000000000000006011321664017000240760ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2016-11-30 19:39:49.317737234 +0000 +++ src/fs/proc/meminfo.c 2016-11-30 19:40:43.833737234 +0000 @@ -196,6 +196,7 @@ static const struct file_operations memi static int __init proc_meminfo_init(void) { + printk("a\n"); proc_create("meminfo", 0, NULL, &meminfo_proc_fops); return 0; } kpatch-0.5.0/test/integration/fedora-25/meminfo-init2-FAIL.patch000066400000000000000000000011441321664017000241630ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-02-08 21:06:25.943876606 -0500 +++ src/fs/proc/meminfo.c 2017-02-08 21:37:44.992922127 -0500 @@ -51,6 +51,8 @@ static int meminfo_proc_show(struct seq_ unsigned long pages[NR_LRU_LISTS]; int lru; + printk("a\n"); + si_meminfo(&i); si_swapinfo(&i); committed = percpu_counter_read_positive(&vm_committed_as); @@ -175,6 +177,7 @@ static const struct file_operations memi static int __init proc_meminfo_init(void) { + printk("a\n"); proc_create("meminfo", 0, NULL, &meminfo_proc_fops); return 0; } kpatch-0.5.0/test/integration/fedora-25/meminfo-string-LOADED.test000077500000000000000000000000551321664017000245440ustar00rootroot00000000000000#!/bin/bash grep VMALLOCCHUNK /proc/meminfo kpatch-0.5.0/test/integration/fedora-25/meminfo-string.patch000066400000000000000000000010371321664017000237340ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-02-08 21:06:25.943876606 -0500 +++ fs/proc/meminfo.c 2017-02-08 21:35:26.574918774 -0500 @@ -132,7 +132,7 @@ static int meminfo_proc_show(struct seq_ seq_printf(m, "VmallocTotal: %8lu kB\n", (unsigned long)VMALLOC_TOTAL >> 10); show_val_kb(m, 
"VmallocUsed: ", 0ul); - show_val_kb(m, "VmallocChunk: ", 0ul); + show_val_kb(m, "VMALLOCCHUNK: ", 0ul); #ifdef CONFIG_MEMORY_FAILURE seq_printf(m, "HardwareCorrupted: %5lu kB\n", kpatch-0.5.0/test/integration/fedora-25/module-call-external.patch000066400000000000000000000017501321664017000250160ustar00rootroot00000000000000diff -Nupr src.orig/fs/nfsd/export.c src/fs/nfsd/export.c --- src.orig/fs/nfsd/export.c 2016-11-30 19:39:49.284737234 +0000 +++ src/fs/nfsd/export.c 2016-11-30 19:40:50.089737234 +0000 @@ -1193,6 +1193,8 @@ static void exp_flags(struct seq_file *m } } +extern char *kpatch_string(void); + static int e_show(struct seq_file *m, void *p) { struct cache_head *cp = p; @@ -1202,6 +1204,7 @@ static int e_show(struct seq_file *m, vo if (p == SEQ_START_TOKEN) { seq_puts(m, "# Version 1.1\n"); seq_puts(m, "# Path Client(Flags) # IPs\n"); + seq_puts(m, kpatch_string()); return 0; } diff -Nupr src.orig/net/netlink/af_netlink.c src/net/netlink/af_netlink.c --- src.orig/net/netlink/af_netlink.c 2016-11-30 19:39:45.299737234 +0000 +++ src/net/netlink/af_netlink.c 2016-11-30 19:40:50.090737234 +0000 @@ -2619,4 +2619,9 @@ panic: panic("netlink_init: Cannot allocate nl_table\n"); } +char *kpatch_string(void) +{ + return "# kpatch\n"; +} + core_initcall(netlink_proto_init); kpatch-0.5.0/test/integration/fedora-25/module-kvm-fixup.patch000066400000000000000000000006711321664017000242120ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/vmx.c src/arch/x86/kvm/vmx.c --- src.orig/arch/x86/kvm/vmx.c 2016-11-30 19:39:46.591737234 +0000 +++ src/arch/x86/kvm/vmx.c 2016-11-30 19:40:53.182737234 +0000 @@ -10845,6 +10845,8 @@ static int vmx_check_intercept(struct kv struct x86_instruction_info *info, enum x86_intercept_stage stage) { + if (!jiffies) + printk("kpatch vmx_check_intercept\n"); return X86EMUL_CONTINUE; } kpatch-0.5.0/test/integration/fedora-25/module-shadow.patch000066400000000000000000000016361321664017000235530ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/vmx.c src/arch/x86/kvm/vmx.c --- src.orig/arch/x86/kvm/vmx.c 2016-11-30 19:39:46.591737234 +0000 +++ src/arch/x86/kvm/vmx.c 2016-11-30 19:40:56.291737234 +0000 @@ -10829,10 +10829,20 @@ static void vmx_leave_nested(struct kvm_ * It should only be called before L2 actually succeeded to run, and when * vmcs01 is current (it doesn't leave_guest_mode() or switch vmcss). 
*/ +#include "kpatch.h" static void nested_vmx_entry_failure(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12, u32 reason, unsigned long qualification) { + int *kpatch; + + kpatch = kpatch_shadow_alloc(vcpu, "kpatch", sizeof(*kpatch), + GFP_KERNEL); + if (kpatch) { + kpatch_shadow_get(vcpu, "kpatch"); + kpatch_shadow_free(vcpu, "kpatch"); + } + load_vmcs12_host_state(vcpu, vmcs12); vmcs12->vm_exit_reason = reason | VMX_EXIT_REASONS_FAILED_VMENTRY; vmcs12->exit_qualification = qualification; kpatch-0.5.0/test/integration/fedora-25/multiple.test000077500000000000000000000017051321664017000225160ustar00rootroot00000000000000#!/bin/bash SCRIPTDIR="$(readlink -f $(dirname $(type -p $0)))" ROOTDIR="$(readlink -f $SCRIPTDIR/../../..)" KPATCH="sudo $ROOTDIR/kpatch/kpatch" set -o errexit die() { echo "ERROR: $@" >&2 exit 1 } ko_to_test() { tmp=${1%.ko}-LOADED.test echo ${tmp#kpatch-} } # make sure any modules added here are disjoint declare -a modules=(kpatch-cmdline-string.ko kpatch-meminfo-string.ko) for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) $SCRIPTDIR/$testprog && die "$SCRIPTDIR/$testprog succeeded before loading any modules" done for mod in "${modules[@]}"; do $KPATCH load $mod done for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) $SCRIPTDIR/$testprog || die "$SCRIPTDIR/$testprog failed after loading modules" done for mod in "${modules[@]}"; do $KPATCH unload $mod done for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) $SCRIPTDIR/$testprog && die "$SCRIPTDIR/$testprog succeeded after unloading modules" done exit 0 kpatch-0.5.0/test/integration/fedora-25/new-function.patch000066400000000000000000000015211321664017000234100ustar00rootroot00000000000000diff -Nupr src.orig/drivers/tty/n_tty.c src/drivers/tty/n_tty.c --- src.orig/drivers/tty/n_tty.c 2016-11-30 19:39:48.532737234 +0000 +++ src/drivers/tty/n_tty.c 2016-11-30 19:40:59.432737234 +0000 @@ -2269,7 +2269,7 @@ static ssize_t n_tty_read(struct tty_str * lock themselves) */ -static ssize_t n_tty_write(struct tty_struct *tty, struct file *file, +static ssize_t noinline kpatch_n_tty_write(struct tty_struct *tty, struct file *file, const unsigned char *buf, size_t nr) { const unsigned char *b = buf; @@ -2356,6 +2356,12 @@ break_out: return (b - buf) ? 
b - buf : retval; } +static ssize_t n_tty_write(struct tty_struct *tty, struct file *file, + const unsigned char *buf, size_t nr) +{ + return kpatch_n_tty_write(tty, file, buf, nr); +} + /** * n_tty_poll - poll method for N_TTY * @tty: terminal device kpatch-0.5.0/test/integration/fedora-25/new-globals.patch000066400000000000000000000020311321664017000232030ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2017-02-08 21:31:22.297912856 -0500 +++ src/fs/proc/cmdline.c 2017-02-08 21:32:08.510913975 -0500 @@ -27,3 +27,10 @@ static int __init proc_cmdline_init(void return 0; } fs_initcall(proc_cmdline_init); + +#include +void kpatch_print_message(void) +{ + if (!jiffies) + printk("hello there!\n"); +} diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2017-02-08 21:06:25.943876606 -0500 +++ src/fs/proc/meminfo.c 2017-02-08 21:33:26.498915865 -0500 @@ -19,6 +19,8 @@ #include #include "internal.h" +void kpatch_print_message(void); + void __attribute__((weak)) arch_report_meminfo(struct seq_file *m) { } @@ -65,6 +67,7 @@ static int meminfo_proc_show(struct seq_ available = si_mem_available(); + kpatch_print_message(); show_val_kb(m, "MemTotal: ", i.totalram); show_val_kb(m, "MemFree: ", i.freeram); show_val_kb(m, "MemAvailable: ", available); kpatch-0.5.0/test/integration/fedora-25/parainstructions-section.patch000066400000000000000000000006551321664017000260550ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/generic.c src/fs/proc/generic.c --- src.orig/fs/proc/generic.c 2016-11-30 19:39:49.317737234 +0000 +++ src/fs/proc/generic.c 2016-11-30 19:41:05.659737234 +0000 @@ -192,6 +192,7 @@ int proc_alloc_inum(unsigned int *inum) unsigned int i; int error; + printk("kpatch-test: testing change to .parainstructions section\n"); retry: if (!ida_pre_get(&proc_inum_ida, GFP_KERNEL)) return -ENOMEM; kpatch-0.5.0/test/integration/fedora-25/remote-setup000077500000000000000000000033431321664017000223360ustar00rootroot00000000000000#!/bin/bash -x # install rpms on a Fedora 22 system to prepare it for kpatch integration tests set -o errexit [[ $UID != 0 ]] && sudo=sudo warn() { echo "ERROR: $1" >&2 } die() { warn "$@" exit 1 } install_rpms() { # crude workaround for a weird dnf bug where it fails to download $sudo dnf install -y $* || $sudo dnf install -y $* } install_rpms gcc elfutils elfutils-devel rpmdevtools pesign openssl numactl-devel wget patchutils $sudo dnf builddep -y kernel || $sudo dnf builddep -y kernel # install kernel debuginfo and devel RPMs for target kernel kverrel=$(uname -r) kverrel=${kverrel%.x86_64} kver=${kverrel%%-*} krel=${kverrel#*-} install_rpms https://kojipkgs.fedoraproject.org/packages/kernel/$kver/$krel/x86_64/kernel-debuginfo-$kver-$krel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/kernel/$kver/$krel/x86_64/kernel-debuginfo-common-x86_64-$kver-$krel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/kernel/$kver/$krel/x86_64/kernel-devel-$kver-$krel.x86_64.rpm # install version of gcc which was used to build the target kernel gccver=$(gcc --version |head -n1 |cut -d' ' -f3-) kgccver=$(readelf -p .comment /usr/lib/debug/lib/modules/$(uname -r)/vmlinux |grep GCC: | tr -s ' ' | cut -d ' ' -f6-) if [[ $gccver != $kgccver ]]; then gver=$(echo $kgccver | awk '{print $1}') grel=$(echo $kgccver | sed 's/.*-\(.*\))/\1/') grel=$grel.$(rpm -q gcc |sed 's/.*\.\(.*\)\.x86_64/\1/') install_rpms 
https://kojipkgs.fedoraproject.org/packages/gcc/$gver/$grel/x86_64/cpp-$gver-$grel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/gcc/$gver/$grel/x86_64/gcc-$gver-$grel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/gcc/$gver/$grel/x86_64/libgomp-$gver-$grel.x86_64.rpm fi install_rpms ccache ccache -M 5G kpatch-0.5.0/test/integration/fedora-25/replace-section-references.patch000066400000000000000000000007461321664017000262000ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/x86.c src/arch/x86/kvm/x86.c --- src.orig/arch/x86/kvm/x86.c 2017-02-08 20:48:33.312850621 -0500 +++ src/arch/x86/kvm/x86.c 2017-02-08 20:49:15.030851631 -0500 @@ -250,6 +250,8 @@ static void shared_msr_update(unsigned s void kvm_define_shared_msr(unsigned slot, u32 msr) { + if (!jiffies) + printk("kpatch kvm define shared msr\n"); BUG_ON(slot >= KVM_NR_SHARED_MSRS); shared_msrs_global.msrs[slot] = msr; if (slot >= shared_msrs_global.nr) kpatch-0.5.0/test/integration/fedora-25/shadow-newpid-LOADED.test000077500000000000000000000000551321664017000243570ustar00rootroot00000000000000#!/bin/bash grep -q newpid: /proc/$$/status kpatch-0.5.0/test/integration/fedora-25/shadow-newpid.patch000066400000000000000000000042361321664017000235530ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/array.c src/fs/proc/array.c --- src.orig/fs/proc/array.c 2017-02-08 21:17:58.244893377 -0500 +++ src/fs/proc/array.c 2017-02-08 21:26:21.670905573 -0500 @@ -348,12 +348,19 @@ static inline void task_seccomp(struct s #endif } +#include "kpatch.h" static inline void task_context_switch_counts(struct seq_file *m, struct task_struct *p) { + int *newpid; + seq_put_decimal_ull(m, "voluntary_ctxt_switches:\t", p->nvcsw); seq_put_decimal_ull(m, "\nnonvoluntary_ctxt_switches:\t", p->nivcsw); seq_putc(m, '\n'); + + newpid = kpatch_shadow_get(p, "newpid"); + if (newpid) + seq_printf(m, "newpid:\t%d\n", *newpid); } static void task_cpus_allowed(struct seq_file *m, struct task_struct *task) diff -Nupr src.orig/kernel/exit.c src/kernel/exit.c --- src.orig/kernel/exit.c 2017-02-08 21:26:31.119905802 -0500 +++ src/kernel/exit.c 2017-02-08 21:27:11.347906776 -0500 @@ -725,6 +725,7 @@ static void check_stack_usage(void) static inline void check_stack_usage(void) {} #endif +#include "kpatch.h" void __noreturn do_exit(long code) { struct task_struct *tsk = current; @@ -828,6 +829,8 @@ void __noreturn do_exit(long code) exit_task_work(tsk); exit_thread(tsk); + kpatch_shadow_free(tsk, "newpid"); + /* * Flush inherited counters to the parent - before the parent * gets woken up by child-exit notifications. diff -Nupr src.orig/kernel/fork.c src/kernel/fork.c --- src.orig/kernel/fork.c 2017-02-08 21:27:34.629907340 -0500 +++ src/kernel/fork.c 2017-02-08 21:28:31.182908710 -0500 @@ -1904,6 +1904,7 @@ struct task_struct *fork_idle(int cpu) * It copies the process, and if successful kick-starts * it and waits for it to finish using the VM if required. 
*/ +#include "kpatch.h" long _do_fork(unsigned long clone_flags, unsigned long stack_start, unsigned long stack_size, @@ -1943,6 +1944,13 @@ long _do_fork(unsigned long clone_flags, if (!IS_ERR(p)) { struct completion vfork; struct pid *pid; + int *newpid; + static int ctr = 0; + + newpid = kpatch_shadow_alloc(p, "newpid", sizeof(*newpid), + GFP_KERNEL); + if (newpid) + *newpid = ctr++; trace_sched_process_fork(current, p); kpatch-0.5.0/test/integration/fedora-25/smp-locks-section.patch000066400000000000000000000007451321664017000243550ustar00rootroot00000000000000diff -Nupr src.orig/drivers/tty/tty_buffer.c src/drivers/tty/tty_buffer.c --- src.orig/drivers/tty/tty_buffer.c 2016-11-30 19:39:48.532737234 +0000 +++ src/drivers/tty/tty_buffer.c 2016-11-30 19:41:15.067737234 +0000 @@ -255,6 +255,8 @@ static int __tty_buffer_request_room(str struct tty_buffer *b, *n; int left, change; + if (!size) + printk("kpatch-test: testing .smp_locks section changes\n"); b = buf->tail; if (b->flags & TTYB_NORMAL) left = 2 * b->size - b->used; kpatch-0.5.0/test/integration/fedora-25/special-static-2.patch000066400000000000000000000012251321664017000240410ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/x86.c src/arch/x86/kvm/x86.c --- src.orig/arch/x86/kvm/x86.c 2016-11-30 19:39:46.590737234 +0000 +++ src/arch/x86/kvm/x86.c 2016-11-30 19:41:18.201737234 +0000 @@ -2039,12 +2039,20 @@ static void record_steal_time(struct kvm &vcpu->arch.st.steal, sizeof(struct kvm_steal_time)); } +void kpatch_kvm_x86_foo(void) +{ + if (!jiffies) + printk("kpatch kvm x86 foo\n"); +} + int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) { bool pr = false; u32 msr = msr_info->index; u64 data = msr_info->data; + kpatch_kvm_x86_foo(); + switch (msr) { case MSR_AMD64_NB_CFG: case MSR_IA32_UCODE_REV: kpatch-0.5.0/test/integration/fedora-25/special-static.patch000066400000000000000000000010351321664017000237010ustar00rootroot00000000000000diff -Nupr src.orig/kernel/fork.c src/kernel/fork.c --- src.orig/kernel/fork.c 2016-11-30 19:39:45.165737234 +0000 +++ src/kernel/fork.c 2016-11-30 19:41:21.329737234 +0000 @@ -1169,10 +1169,18 @@ static void posix_cpu_timers_init_group( INIT_LIST_HEAD(&sig->cpu_timers[2]); } +void kpatch_foo(void) +{ + if (!jiffies) + printk("kpatch copy signal\n"); +} + static int copy_signal(unsigned long clone_flags, struct task_struct *tsk) { struct signal_struct *sig; + kpatch_foo(); + if (clone_flags & CLONE_THREAD) return 0; kpatch-0.5.0/test/integration/fedora-25/tracepoints-section.patch000066400000000000000000000011671321664017000247770ustar00rootroot00000000000000ensure __jump_table is parsed and we can tell that it effectively didn't change diff -Nupr src.orig/kernel/time/timer.c src/kernel/time/timer.c --- src.orig/kernel/time/timer.c 2016-11-30 19:39:45.150737234 +0000 +++ src/kernel/time/timer.c 2016-11-30 20:02:08.254737234 +0000 @@ -1637,6 +1637,9 @@ static void run_timer_softirq(struct sof { struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]); + if (!base) + printk("kpatch-test: testing __tracepoints section changes\n"); + __run_timers(base); if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && base->nohz_active) __run_timers(this_cpu_ptr(&timer_bases[BASE_DEF])); kpatch-0.5.0/test/integration/fedora-25/warn-detect-FAIL.patch000066400000000000000000000004151321664017000237230ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/x86.c src/arch/x86/kvm/x86.c --- src.orig/arch/x86/kvm/x86.c 2016-11-30 19:39:46.590737234 +0000 +++ src/arch/x86/kvm/x86.c 
2016-11-30 19:41:24.482737234 +0000 @@ -1,3 +1,4 @@ + /* * Kernel-based Virtual Machine driver for Linux * kpatch-0.5.0/test/integration/kpatch-test000077500000000000000000000167321321664017000204560ustar00rootroot00000000000000#!/bin/bash # # kpatch integration test framework # # Copyright (C) 2014 Josh Poimboeuf # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, # 02110-1301, USA. # # # This is a basic integration test framework for kpatch, which tests building, # loading, and unloading patches, as well as any other related custom tests. # # This script looks for test input files in the current directory. It expects # certain file naming conventions: # # - foo.patch: patch that should build successfully # # - foo-SLOW.patch: patch that should be skipped in the quick test # # - bar-FAIL.patch: patch that should fail to build # # - foo-LOADED.test: executable which tests whether the foo.patch module is # loaded. It will be used to test that loading/unloading the patch module # works as expected. # # Any other *.test files will be executed after all the patch modules have been # built from the *.patch files. They can be used for more custom tests above # and beyond the simple loading and unloading tests. shopt -s nullglob SCRIPTDIR="$(readlink -f $(dirname $(type -p $0)))" ROOTDIR="$(readlink -f $SCRIPTDIR/../..)" # TODO: option to use system-installed binaries instead KPATCH="sudo $ROOTDIR/kpatch/kpatch" RMMOD="sudo rmmod" unset CCACHE_HASHDIR KPATCHBUILD="$ROOTDIR"/kpatch-build/kpatch-build ERROR=0 LOG=test.log rm -f $LOG PATCHDIR="${PATCHDIR:-$PWD}" declare -a PATCH_LIST declare -a TEST_LIST usage() { echo "usage: $0 [options] [patch1 ... patchN]" >&2 echo " patchN Pathnames of patches to test" >&2 echo " -h, --help Show this help message" >&2 echo " -c, --cached Don't rebuild patch modules" >&2 echo " -d, --directory Patch directory" >&2 echo " -q, --quick Just combine all patches into one module for testing" >&2 } options=$(getopt -o hcd:q -l "help,cached,directory,quick" -- "$@") || exit 1 eval set -- "$options" while [[ $# -gt 0 ]]; do case "$1" in -h|--help) usage exit 0 ;; -c|--cached) SKIPBUILD=1 ;; -d|--directory) PATCHDIR="$2" shift ;; -q|--quick) QUICK=1 ;; *) [[ "$1" = "--" ]] && shift && continue PATCH_LIST+=("$1") ;; esac shift done if [[ ${#PATCH_LIST[@]} = 0 ]]; then PATCH_LIST=($PATCHDIR/*.patch) TEST_LIST=($PATCHDIR/*.test) if [[ ${#PATCH_LIST[@]} = 0 ]]; then echo "No patches found!" 
exit 1 fi else for file in "${PATCH_LIST[@]}"; do prefix=${file%%.patch} [[ -e "$prefix-FAIL.test" ]] && TEST_LIST+=("$prefix-FAIL.test") [[ -e "$prefix-LOADED.test" ]] && TEST_LIST+=("$prefix-LOADED.test") [[ -e "$prefix-SLOW.test" ]] && TEST_LIST+=("$prefix-SLOW.test") done fi error() { echo "ERROR: $@" |tee -a $LOG >&2 ERROR=$((ERROR + 1)) } log() { echo "$@" |tee -a $LOG } unload_all() { for i in `/sbin/lsmod |egrep '^kpatch' |awk '{print $1}'`; do if [[ $i != kpatch ]]; then $KPATCH unload $i >> $LOG 2>&1 || error "\"kpatch unload $i\" failed" fi done if /sbin/lsmod |egrep -q '^kpatch'; then $RMMOD kpatch >> $LOG 2>&1 || error "\"rmmod kpatch\" failed" fi } build_module() { file=$1 prefix=$(basename ${file%%.patch}) module=kpatch-$prefix.ko if [[ $prefix =~ -FAIL ]]; then shouldfail=1 else shouldfail=0 fi if [[ $SKIPBUILD -eq 1 ]]; then skip=0 [[ $shouldfail -eq 1 ]] && skip=1 [[ -e $module ]] && skip=1 [[ $skip -eq 1 ]] && log "skipping build: $prefix" && return fi log "build: $prefix" if ! $KPATCHBUILD $file >> $LOG 2>&1; then [[ $shouldfail -eq 0 ]] && error "$prefix: build failed" else [[ $shouldfail -eq 1 ]] && error "$prefix: build succeeded when it should have failed" fi } run_load_test() { file=$1 prefix=$(basename ${file%%.patch}) module=kpatch-$prefix.ko testprog="$(dirname $1)/$prefix-LOADED.test" [[ $prefix =~ -FAIL ]] && return if [[ ! -e $module ]]; then log "can't find $module, skipping" return fi if [[ -e $testprog ]]; then log "load test: $prefix" else log "load test: $prefix (no test prog)" fi if [[ -e $testprog ]] && $testprog >> $LOG 2>&1; then error "$prefix: $testprog succeeded before kpatch load" return fi if ! $KPATCH load $module >> $LOG 2>&1; then error "$prefix: kpatch load failed" return fi if [[ -e $testprog ]] && ! $testprog >> $LOG 2>&1; then error "$prefix: $testprog failed after kpatch load" fi if ! $KPATCH unload $module >> $LOG 2>&1; then error "$prefix: kpatch unload failed" return fi if [[ -e $testprog ]] && $testprog >> $LOG 2>&1; then error "$prefix: $testprog succeeded after kpatch unload" return fi } run_custom_test() { testprog=$1 prefix=$(basename ${file%%.test}) [[ $testprog = *-LOADED.test ]] && return log "custom test: $prefix" if ! $testprog >> $LOG 2>&1; then error "$prefix: test failed" fi } build_combined_module() { if [[ $SKIPBUILD -eq 1 ]] && [[ -e kpatch-COMBINED.ko ]]; then log "skipping build: combined" return fi declare -a COMBINED_LIST for file in "${PATCH_LIST[@]}"; do [[ $file =~ -FAIL ]] && log "combine: skipping $file" && continue [[ $file =~ -SLOW ]] && log "combine: skipping $file" && continue COMBINED_LIST+=($file) done if [[ ${#COMBINED_LIST[@]} -le 1 ]]; then log "skipping build: combined (only ${#PATCH_LIST[@]} patch(es))" return fi log "build: combined module" if ! $KPATCHBUILD -n kpatch-COMBINED "${COMBINED_LIST[@]}" >> $LOG 2>&1; then error "combined build failed" fi } run_combined_test() { if [[ ! -e kpatch-COMBINED.ko ]]; then log "can't find kpatch-COMBINED.ko, skipping" return fi log "load test: combined module" unload_all for testprog in "${TEST_LIST[@]}"; do [[ $testprog != *-LOADED.test ]] && continue if $testprog >> $LOG 2>&1; then error "combined: $testprog succeeded before kpatch load" return fi done if ! $KPATCH load kpatch-COMBINED.ko >> $LOG 2>&1; then error "combined: kpatch load failed" return fi for testprog in "${TEST_LIST[@]}"; do [[ $testprog != *-LOADED.test ]] && continue if ! $testprog >> $LOG 2>&1; then error "combined: $testprog failed after kpatch load" fi done if ! 
$KPATCH unload kpatch-COMBINED.ko >> $LOG 2>&1; then error "combined: kpatch unload failed" return fi for testprog in "${TEST_LIST[@]}"; do [[ $testprog != *-LOADED.test ]] && continue if $testprog >> $LOG 2>&1; then error "combined: $testprog succeeded after kpatch unload" return fi done } echo "clearing printk buffer" sudo dmesg -C if [[ $QUICK != 1 ]]; then for file in "${PATCH_LIST[@]}"; do build_module $file done fi build_combined_module unload_all if [[ $QUICK != 1 ]]; then for file in "${PATCH_LIST[@]}"; do run_load_test $file done fi run_combined_test if [[ $QUICK != 1 ]]; then for testprog in "${TEST_LIST[@]}"; do unload_all run_custom_test $testprog done fi unload_all dmesg |grep -q "Call Trace" && error "kernel error detected in printk buffer" if [[ $ERROR -gt 0 ]]; then log "$ERROR errors encountered" echo "see test.log for more information" else log "SUCCESS" fi exit $ERROR kpatch-0.5.0/test/integration/rebase-patches000077500000000000000000000020741321664017000211070ustar00rootroot00000000000000#!/bin/bash # # rebase a set of patches, assumes the kernel has already been downloaded into # the kpatch $CACHEDIR.Output patches go into ./${ID}-${VERSION_ID}/ # # Example: # # ./rebase-patches old_dir/*.patch CACHEDIR="${CACHEDIR:-$HOME/.kpatch}" SRCDIR="$CACHEDIR/src" source /etc/os-release OUTDIR=$(pwd)/${ID}-${VERSION_ID} mkdir -p $OUTDIR echo "* Making backup copy of kernel sources" rm -rf ${SRCDIR}.orig cp -r $SRCDIR ${SRCDIR}.orig for P in $@; do echo echo "* Patch: $(basename $P)" echo "** dry run..." patch -d $CACHEDIR --dry-run --quiet -p0 < $P [[ $? -ne 0 ]] && echo "*** Skipping! ***" && continue echo "** patching..." patch -d $CACHEDIR -p0 --no-backup-if-mismatch < $P echo "** generating new $(basename $P)..." NEWP=$OUTDIR/$(basename $P) awk '/^diff|^patch/{exit} {print $LF}' $P > $NEWP cd $CACHEDIR diff -Nupr src.orig src >> $NEWP cd - echo "** reversing patch to restore tree..." 
patch -d $CACHEDIR -p0 -R < $NEWP done echo "*** Removing backup copy of kernel sources" rm -rf ${SRCDIR}.orig echo echo "*** Done" kpatch-0.5.0/test/integration/ubuntu-16.04/000077500000000000000000000000001321664017000202605ustar00rootroot00000000000000kpatch-0.5.0/test/integration/ubuntu-16.04/README000066400000000000000000000000211321664017000211310ustar00rootroot000000000000004.4.0-53-generic kpatch-0.5.0/test/integration/ubuntu-16.04/bug-table-section.patch000066400000000000000000000007711321664017000246120ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/proc_sysctl.c src/fs/proc/proc_sysctl.c --- src.orig/fs/proc/proc_sysctl.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/proc_sysctl.c 2016-12-15 19:56:00.204000000 +0000 @@ -301,6 +301,8 @@ void sysctl_head_put(struct ctl_table_he static struct ctl_table_header *sysctl_head_grab(struct ctl_table_header *head) { + if (jiffies == 0) + printk("kpatch-test: testing __bug_table section changes\n"); BUG_ON(!head); spin_lock(&sysctl_lock); if (!use_table(head)) kpatch-0.5.0/test/integration/ubuntu-16.04/cmdline-string-LOADED.test000077500000000000000000000000511321664017000250250ustar00rootroot00000000000000#!/bin/bash grep kpatch=1 /proc/cmdline kpatch-0.5.0/test/integration/ubuntu-16.04/cmdline-string.patch000066400000000000000000000006011321664017000242150ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/cmdline.c 2016-12-15 19:56:12.848000000 +0000 @@ -5,7 +5,7 @@ static int cmdline_proc_show(struct seq_file *m, void *v) { - seq_printf(m, "%s\n", saved_command_line); + seq_printf(m, "%s kpatch=1\n", saved_command_line); return 0; } kpatch-0.5.0/test/integration/ubuntu-16.04/data-new-LOADED.test000077500000000000000000000000541321664017000236110ustar00rootroot00000000000000#!/bin/bash grep "kpatch: 5" /proc/meminfo kpatch-0.5.0/test/integration/ubuntu-16.04/data-new.patch000066400000000000000000000013321321664017000230000ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/meminfo.c 2016-12-15 19:56:17.076000000 +0000 @@ -23,6 +23,8 @@ void __attribute__((weak)) arch_report_m { } +static int foo = 5; + static int meminfo_proc_show(struct seq_file *m, void *v) { struct sysinfo i; @@ -110,6 +112,7 @@ static int meminfo_proc_show(struct seq_ "CmaTotal: %8lu kB\n" "CmaFree: %8lu kB\n" #endif + "kpatch: %d" , K(i.totalram), K(i.freeram), @@ -169,6 +172,7 @@ static int meminfo_proc_show(struct seq_ , K(totalcma_pages) , K(global_page_state(NR_FREE_CMA_PAGES)) #endif + ,foo ); hugetlb_report_meminfo(m); kpatch-0.5.0/test/integration/ubuntu-16.04/data-read-mostly.patch000066400000000000000000000004471321664017000244550ustar00rootroot00000000000000diff -Nupr src.orig/net/core/dev.c src/net/core/dev.c --- src.orig/net/core/dev.c 2016-12-15 19:55:39.848000000 +0000 +++ src/net/core/dev.c 2016-12-15 19:56:21.344000000 +0000 @@ -3926,6 +3926,7 @@ ncls: case RX_HANDLER_PASS: break; default: + printk("BUG!\n"); BUG(); } } kpatch-0.5.0/test/integration/ubuntu-16.04/fixup-section.patch000066400000000000000000000007331321664017000241010ustar00rootroot00000000000000diff -Nupr src.orig/fs/readdir.c src/fs/readdir.c --- src.orig/fs/readdir.c 2016-12-15 19:55:39.196000000 +0000 +++ src/fs/readdir.c 2016-12-15 19:56:25.868000000 +0000 @@ -173,6 +173,8 @@ static int filldir(struct dir_context *c goto efault; } 
dirent = buf->current_dir; + if (dirent->d_ino == 12345678) + printk("kpatch-test: testing .fixup section changes\n"); if (__put_user(d_ino, &dirent->d_ino)) goto efault; if (__put_user(reclen, &dirent->d_reclen)) kpatch-0.5.0/test/integration/ubuntu-16.04/gcc-constprop.patch000066400000000000000000000010151321664017000240570ustar00rootroot00000000000000ensure timekeeping_forward_now.constprop.8 and timekeeping_forward_now.constprop.9 are correlated. diff -Nupr src.orig/kernel/time/timekeeping.c src/kernel/time/timekeeping.c --- src.orig/kernel/time/timekeeping.c 2016-12-15 19:56:00.136000000 +0000 +++ src/kernel/time/timekeeping.c 2016-12-15 19:56:30.496000000 +0000 @@ -1148,6 +1148,9 @@ void do_gettimeofday(struct timeval *tv) { struct timespec64 now; + if (!tv) + return; + getnstimeofday64(&now); tv->tv_sec = now.tv_sec; tv->tv_usec = now.tv_nsec/1000; kpatch-0.5.0/test/integration/ubuntu-16.04/gcc-isra.patch000066400000000000000000000006541321664017000227760ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/proc_sysctl.c src/fs/proc/proc_sysctl.c --- src.orig/fs/proc/proc_sysctl.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/proc_sysctl.c 2016-12-15 19:56:34.800000000 +0000 @@ -46,6 +46,7 @@ void proc_sys_poll_notify(struct ctl_tab if (!poll) return; + printk("kpatch-test: testing gcc .isra function name mangling\n"); atomic_inc(&poll->event); wake_up_interruptible(&poll->wait); } kpatch-0.5.0/test/integration/ubuntu-16.04/gcc-mangled-3.patch000066400000000000000000000010021321664017000235730ustar00rootroot00000000000000ensure that __cmpxchg_double_slab.isra.45 and __cmpxchg_double_slab.isra.45.part.46 aren't correlated. diff -Nupr src.orig/mm/slub.c src/mm/slub.c --- src.orig/mm/slub.c 2016-12-15 19:55:38.988000000 +0000 +++ src/mm/slub.c 2016-12-15 19:56:39.068000000 +0000 @@ -5531,6 +5531,9 @@ void get_slabinfo(struct kmem_cache *s, int node; struct kmem_cache_node *n; + if (!jiffies) + printk("slabinfo\n"); + for_each_kmem_cache_node(s, node, n) { nr_slabs += node_nr_slabs(n); nr_objs += node_nr_objs(n); kpatch-0.5.0/test/integration/ubuntu-16.04/gcc-static-local-var-2.patch000066400000000000000000000013701321664017000253400ustar00rootroot00000000000000diff -Nupr src.orig/mm/mmap.c src/mm/mmap.c --- src.orig/mm/mmap.c 2016-12-15 19:55:38.992000000 +0000 +++ src/mm/mmap.c 2016-12-15 19:56:43.684000000 +0000 @@ -1538,6 +1538,7 @@ static inline int accountable_mapping(st return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE; } +#include "kpatch-macros.h" unsigned long mmap_region(struct file *file, unsigned long addr, unsigned long len, vm_flags_t vm_flags, unsigned long pgoff) { @@ -1547,6 +1548,9 @@ unsigned long mmap_region(struct file *f struct rb_node **rb_link, *rb_parent; unsigned long charged = 0; + if (!jiffies) + printk("kpatch mmap foo\n"); + /* Check against address space limit. 
*/ if (!may_expand_vm(mm, len >> PAGE_SHIFT)) { unsigned long nr_pages; kpatch-0.5.0/test/integration/ubuntu-16.04/gcc-static-local-var-3.patch000066400000000000000000000006651321664017000253470ustar00rootroot00000000000000diff -Nupr src.orig/kernel/reboot.c src/kernel/reboot.c --- src.orig/kernel/reboot.c 2016-12-15 19:56:00.196000000 +0000 +++ src/kernel/reboot.c 2016-12-15 19:56:48.264000000 +0000 @@ -366,8 +366,15 @@ SYSCALL_DEFINE4(reboot, int, magic1, int return ret; } +void kpatch_bar(void) +{ + if (!jiffies) + printk("kpatch_foo\n"); +} + static void deferred_cad(struct work_struct *dummy) { + kpatch_bar(); kernel_restart(NULL); } kpatch-0.5.0/test/integration/ubuntu-16.04/gcc-static-local-var-4.patch000066400000000000000000000010051321664017000253350ustar00rootroot00000000000000diff -Nupr src.orig/fs/aio.c src/fs/aio.c --- src.orig/fs/aio.c 2016-12-15 19:55:38.992000000 +0000 +++ src/fs/aio.c 2016-12-15 19:56:52.588000000 +0000 @@ -271,9 +271,16 @@ static int __init aio_setup(void) } __initcall(aio_setup); +void kpatch_aio_foo(void) +{ + if (!jiffies) + printk("kpatch aio foo\n"); +} + static void put_aio_ring_file(struct kioctx *ctx) { struct file *aio_ring_file = ctx->aio_ring_file; + kpatch_aio_foo(); if (aio_ring_file) { truncate_setsize(aio_ring_file->f_inode, 0); kpatch-0.5.0/test/integration/ubuntu-16.04/gcc-static-local-var-4.test000077500000000000000000000001521321664017000252220ustar00rootroot00000000000000#!/bin/bash if $(nm kpatch-gcc-static-local-var-4.ko | grep -q free_ioctx); then exit 1 else exit 0 fi kpatch-0.5.0/test/integration/ubuntu-16.04/gcc-static-local-var-5.patch000066400000000000000000000021571321664017000253470ustar00rootroot00000000000000diff -Nupr src.orig/kernel/audit.c src/kernel/audit.c --- src.orig/kernel/audit.c 2016-12-15 19:56:00.196000000 +0000 +++ src/kernel/audit.c 2016-12-15 19:56:56.868000000 +0000 @@ -213,6 +213,12 @@ void audit_panic(const char *message) } } +void kpatch_audit_foo(void) +{ + if (!jiffies) + printk("kpatch audit foo\n"); +} + static inline int audit_rate_check(void) { static unsigned long last_check = 0; @@ -223,6 +229,7 @@ static inline int audit_rate_check(void) unsigned long elapsed; int retval = 0; + kpatch_audit_foo(); if (!audit_rate_limit) return 1; spin_lock_irqsave(&lock, flags); @@ -242,6 +249,11 @@ static inline int audit_rate_check(void) return retval; } +noinline void kpatch_audit_check(void) +{ + audit_rate_check(); +} + /** * audit_log_lost - conditionally log lost audit message event * @message: the message stating reason for lost audit message @@ -288,6 +300,8 @@ static int audit_log_config_change(char struct audit_buffer *ab; int rc = 0; + kpatch_audit_check(); + ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_CONFIG_CHANGE); if (unlikely(!ab)) return rc; kpatch-0.5.0/test/integration/ubuntu-16.04/gcc-static-local-var.patch000066400000000000000000000012111321664017000251730ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kernel/ldt.c src/arch/x86/kernel/ldt.c --- src.orig/arch/x86/kernel/ldt.c 2016-12-15 19:55:57.560000000 +0000 +++ src/arch/x86/kernel/ldt.c 2016-12-15 19:57:01.124000000 +0000 @@ -99,6 +99,12 @@ static void free_ldt_struct(struct ldt_s kfree(ldt); } +void hi_there(void) +{ + if (!jiffies) + printk("hi there\n"); +} + /* * we do not have to muck with descriptors here, that is * done in switch_mm() as needed. 
@@ -109,6 +115,8 @@ int init_new_context(struct task_struct struct mm_struct *old_mm; int retval = 0; + hi_there(); + mutex_init(&mm->context.lock); old_mm = current->mm; if (!old_mm) { kpatch-0.5.0/test/integration/ubuntu-16.04/macro-hooks-LOADED.test000077500000000000000000000000731321664017000243340ustar00rootroot00000000000000#!/bin/bash [[ $(cat /proc/sys/fs/aio-max-nr) = 262144 ]] kpatch-0.5.0/test/integration/ubuntu-16.04/macro-hooks.patch000066400000000000000000000013111321664017000235170ustar00rootroot00000000000000diff -Nupr src.orig/fs/aio.c src/fs/aio.c --- src.orig/fs/aio.c 2016-12-15 19:55:38.992000000 +0000 +++ src/fs/aio.c 2016-12-15 19:57:05.396000000 +0000 @@ -1716,6 +1716,20 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t return ret; } +static int aio_max_nr_orig; +void kpatch_load_aio_max_nr(void) +{ + aio_max_nr_orig = aio_max_nr; + aio_max_nr = 0x40000; +} +void kpatch_unload_aio_max_nr(void) +{ + aio_max_nr = aio_max_nr_orig; +} +#include "kpatch-macros.h" +KPATCH_LOAD_HOOK(kpatch_load_aio_max_nr); +KPATCH_UNLOAD_HOOK(kpatch_unload_aio_max_nr); + /* io_getevents: * Attempts to read at least min_nr events and up to nr events from * the completion queue for the aio_context specified by ctx_id. If kpatch-0.5.0/test/integration/ubuntu-16.04/macro-printk.patch000066400000000000000000000111441321664017000237100ustar00rootroot00000000000000diff -Nupr src.orig/net/ipv4/fib_frontend.c src/net/ipv4/fib_frontend.c --- src.orig/net/ipv4/fib_frontend.c 2016-12-15 19:55:39.724000000 +0000 +++ src/net/ipv4/fib_frontend.c 2016-12-15 19:57:09.672000000 +0000 @@ -728,6 +728,7 @@ errout: return err; } +#include "kpatch-macros.h" static int inet_rtm_newroute(struct sk_buff *skb, struct nlmsghdr *nlh) { struct net *net = sock_net(skb->sk); @@ -746,6 +747,7 @@ static int inet_rtm_newroute(struct sk_b } err = fib_table_insert(tb, &cfg); + KPATCH_PRINTK("[inet_rtm_newroute]: err is %d\n", err); errout: return err; } diff -Nupr src.orig/net/ipv4/fib_semantics.c src/net/ipv4/fib_semantics.c --- src.orig/net/ipv4/fib_semantics.c 2016-12-15 19:55:39.720000000 +0000 +++ src/net/ipv4/fib_semantics.c 2016-12-15 19:57:09.672000000 +0000 @@ -991,6 +991,7 @@ fib_convert_metrics(struct fib_info *fi, return 0; } +#include "kpatch-macros.h" struct fib_info *fib_create_info(struct fib_config *cfg) { int err; @@ -1018,6 +1019,7 @@ struct fib_info *fib_create_info(struct #endif err = -ENOBUFS; + KPATCH_PRINTK("[fib_create_info]: create error err is %d\n",err); if (fib_info_cnt >= fib_info_hash_size) { unsigned int new_size = fib_info_hash_size << 1; struct hlist_head *new_info_hash; @@ -1038,6 +1040,7 @@ struct fib_info *fib_create_info(struct if (!fib_info_hash_size) goto failure; } + KPATCH_PRINTK("[fib_create_info]: 2 create error err is %d\n",err); fi = kzalloc(sizeof(*fi)+nhs*sizeof(struct fib_nh), GFP_KERNEL); if (!fi) @@ -1049,6 +1052,7 @@ struct fib_info *fib_create_info(struct goto failure; } else fi->fib_metrics = (u32 *) dst_default_metrics; + KPATCH_PRINTK("[fib_create_info]: 3 create error err is %d\n",err); fi->fib_net = net; fi->fib_protocol = cfg->fc_protocol; @@ -1065,6 +1069,7 @@ struct fib_info *fib_create_info(struct if (!nexthop_nh->nh_pcpu_rth_output) goto failure; } endfor_nexthops(fi) + KPATCH_PRINTK("[fib_create_info]: 4 create error err is %d\n",err); err = fib_convert_metrics(fi, cfg); if (err) @@ -1117,6 +1122,8 @@ struct fib_info *fib_create_info(struct nh->nh_weight = 1; #endif } + KPATCH_PRINTK("[fib_create_info]: 5 create error err is %d\n",err); + 
KPATCH_PRINTK("[fib_create_info]: 6 create error err is %d\n",err); if (fib_props[cfg->fc_type].error) { if (cfg->fc_gw || cfg->fc_oif || cfg->fc_mp) @@ -1134,6 +1141,7 @@ struct fib_info *fib_create_info(struct goto err_inval; } } + KPATCH_PRINTK("[fib_create_info]: 7 create error err is %d\n",err); if (cfg->fc_scope > RT_SCOPE_HOST) goto err_inval; @@ -1162,6 +1170,7 @@ struct fib_info *fib_create_info(struct if (linkdown == fi->fib_nhs) fi->fib_flags |= RTNH_F_LINKDOWN; } + KPATCH_PRINTK("[fib_create_info]: 8 create error err is %d\n",err); if (fi->fib_prefsrc && !fib_valid_prefsrc(cfg, fi->fib_prefsrc)) goto err_inval; @@ -1170,6 +1179,7 @@ struct fib_info *fib_create_info(struct fib_info_update_nh_saddr(net, nexthop_nh); fib_add_weight(fi, nexthop_nh); } endfor_nexthops(fi) + KPATCH_PRINTK("[fib_create_info]: 9 create error err is %d\n",err); fib_rebalance(fi); @@ -1181,6 +1191,7 @@ link_it: ofi->fib_treeref++; return ofi; } + KPATCH_PRINTK("[fib_create_info]: 10 create error err is %d\n",err); fi->fib_treeref++; atomic_inc(&fi->fib_clntref); @@ -1204,6 +1215,7 @@ link_it: hlist_add_head(&nexthop_nh->nh_hash, head); } endfor_nexthops(fi) spin_unlock_bh(&fib_info_lock); + KPATCH_PRINTK("[fib_create_info]: 11 create error err is %d\n",err); return fi; err_inval: @@ -1214,6 +1226,7 @@ failure: fi->fib_dead = 1; free_fib_info(fi); } + KPATCH_PRINTK("[fib_create_info]: 12 create error err is %d\n",err); return ERR_PTR(err); } diff -Nupr src.orig/net/ipv4/fib_trie.c src/net/ipv4/fib_trie.c --- src.orig/net/ipv4/fib_trie.c 2016-12-15 19:55:39.720000000 +0000 +++ src/net/ipv4/fib_trie.c 2016-12-15 19:57:09.676000000 +0000 @@ -1078,6 +1078,7 @@ static int fib_insert_alias(struct trie } /* Caller must hold RTNL. */ +#include "kpatch-macros.h" int fib_table_insert(struct fib_table *tb, struct fib_config *cfg) { struct trie *t = (struct trie *)tb->tb_data; @@ -1101,11 +1102,14 @@ int fib_table_insert(struct fib_table *t if ((plen < KEYLENGTH) && (key << plen)) return -EINVAL; + KPATCH_PRINTK("[fib_table_insert]: start\n"); fi = fib_create_info(cfg); if (IS_ERR(fi)) { err = PTR_ERR(fi); + KPATCH_PRINTK("[fib_table_insert]: create error err is %d\n",err); goto err; } + KPATCH_PRINTK("[fib_table_insert]: cross\n"); l = fib_find_node(t, &tp, key); fa = l ? 
fib_find_alias(&l->leaf, slen, tos, fi->fib_priority, kpatch-0.5.0/test/integration/ubuntu-16.04/meminfo-cmdline-rebuild-SLOW-LOADED.test000077500000000000000000000001141321664017000273570ustar00rootroot00000000000000#!/bin/bash grep VMALLOCCHUNK /proc/meminfo && grep kpatch=1 /proc/cmdline kpatch-0.5.0/test/integration/ubuntu-16.04/meminfo-cmdline-rebuild-SLOW.patch000066400000000000000000000022501321664017000265510ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/cmdline.c 2016-12-15 19:57:13.988000000 +0000 @@ -5,7 +5,7 @@ static int cmdline_proc_show(struct seq_file *m, void *v) { - seq_printf(m, "%s\n", saved_command_line); + seq_printf(m, "%s kpatch=1\n", saved_command_line); return 0; } diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/meminfo.c 2016-12-15 19:57:13.988000000 +0000 @@ -99,7 +99,7 @@ static int meminfo_proc_show(struct seq_ "Committed_AS: %8lu kB\n" "VmallocTotal: %8lu kB\n" "VmallocUsed: %8lu kB\n" - "VmallocChunk: %8lu kB\n" + "VMALLOCCHUNK: %8lu kB\n" #ifdef CONFIG_MEMORY_FAILURE "HardwareCorrupted: %5lu kB\n" #endif diff -Nupr src.orig/include/linux/kernel.h src/include/linux/kernel.h --- src.orig/include/linux/kernel.h 2016-12-15 19:55:56.996000000 +0000 +++ src/include/linux/kernel.h 2016-12-15 19:57:13.992000000 +0000 @@ -2,6 +2,7 @@ #define _LINUX_KERNEL_H + #include #include #include kpatch-0.5.0/test/integration/ubuntu-16.04/meminfo-init-FAIL.patch000066400000000000000000000006011321664017000244020ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/meminfo.c 2016-12-15 19:57:22.564000000 +0000 @@ -193,6 +193,7 @@ static const struct file_operations memi static int __init proc_meminfo_init(void) { + printk("a\n"); proc_create("meminfo", 0, NULL, &meminfo_proc_fops); return 0; } kpatch-0.5.0/test/integration/ubuntu-16.04/meminfo-init2-FAIL.patch000066400000000000000000000010421321664017000244640ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/meminfo.c 2016-12-15 19:57:18.240000000 +0000 @@ -32,6 +32,7 @@ static int meminfo_proc_show(struct seq_ unsigned long pages[NR_LRU_LISTS]; int lru; + printk("a\n"); /* * display in kilobytes. 
*/ @@ -193,6 +194,7 @@ static const struct file_operations memi static int __init proc_meminfo_init(void) { + printk("a\n"); proc_create("meminfo", 0, NULL, &meminfo_proc_fops); return 0; } kpatch-0.5.0/test/integration/ubuntu-16.04/meminfo-string-LOADED.test000077500000000000000000000000551321664017000250500ustar00rootroot00000000000000#!/bin/bash grep VMALLOCCHUNK /proc/meminfo kpatch-0.5.0/test/integration/ubuntu-16.04/meminfo-string.patch000066400000000000000000000007331321664017000242420ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/meminfo.c 2016-12-15 19:57:26.828000000 +0000 @@ -99,7 +99,7 @@ static int meminfo_proc_show(struct seq_ "Committed_AS: %8lu kB\n" "VmallocTotal: %8lu kB\n" "VmallocUsed: %8lu kB\n" - "VmallocChunk: %8lu kB\n" + "VMALLOCCHUNK: %8lu kB\n" #ifdef CONFIG_MEMORY_FAILURE "HardwareCorrupted: %5lu kB\n" #endif kpatch-0.5.0/test/integration/ubuntu-16.04/module-call-external.patch000066400000000000000000000017501321664017000253220ustar00rootroot00000000000000diff -Nupr src.orig/fs/nfsd/export.c src/fs/nfsd/export.c --- src.orig/fs/nfsd/export.c 2016-12-15 19:55:39.012000000 +0000 +++ src/fs/nfsd/export.c 2016-12-15 19:57:31.068000000 +0000 @@ -1183,6 +1183,8 @@ static void exp_flags(struct seq_file *m } } +extern char *kpatch_string(void); + static int e_show(struct seq_file *m, void *p) { struct cache_head *cp = p; @@ -1192,6 +1194,7 @@ static int e_show(struct seq_file *m, vo if (p == SEQ_START_TOKEN) { seq_puts(m, "# Version 1.1\n"); seq_puts(m, "# Path Client(Flags) # IPs\n"); + seq_puts(m, kpatch_string()); return 0; } diff -Nupr src.orig/net/netlink/af_netlink.c src/net/netlink/af_netlink.c --- src.orig/net/netlink/af_netlink.c 2016-12-15 19:55:39.772000000 +0000 +++ src/net/netlink/af_netlink.c 2016-12-15 19:57:31.072000000 +0000 @@ -3353,4 +3353,9 @@ panic: panic("netlink_init: Cannot allocate nl_table\n"); } +char *kpatch_string(void) +{ + return "# kpatch\n"; +} + core_initcall(netlink_proto_init); kpatch-0.5.0/test/integration/ubuntu-16.04/module-kvm-fixup.patch000066400000000000000000000006711321664017000245160ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/vmx.c src/arch/x86/kvm/vmx.c --- src.orig/arch/x86/kvm/vmx.c 2016-12-15 19:55:57.436000000 +0000 +++ src/arch/x86/kvm/vmx.c 2016-12-15 19:57:35.344000000 +0000 @@ -10574,6 +10574,8 @@ static int vmx_check_intercept(struct kv struct x86_instruction_info *info, enum x86_intercept_stage stage) { + if (!jiffies) + printk("kpatch vmx_check_intercept\n"); return X86EMUL_CONTINUE; } kpatch-0.5.0/test/integration/ubuntu-16.04/module-shadow.patch000066400000000000000000000016361321664017000240570ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/vmx.c src/arch/x86/kvm/vmx.c --- src.orig/arch/x86/kvm/vmx.c 2016-12-15 19:55:57.436000000 +0000 +++ src/arch/x86/kvm/vmx.c 2016-12-15 19:57:39.592000000 +0000 @@ -10558,10 +10558,20 @@ static void vmx_leave_nested(struct kvm_ * It should only be called before L2 actually succeeded to run, and when * vmcs01 is current (it doesn't leave_guest_mode() or switch vmcss). 
*/ +#include "kpatch.h" static void nested_vmx_entry_failure(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12, u32 reason, unsigned long qualification) { + int *kpatch; + + kpatch = kpatch_shadow_alloc(vcpu, "kpatch", sizeof(*kpatch), + GFP_KERNEL); + if (kpatch) { + kpatch_shadow_get(vcpu, "kpatch"); + kpatch_shadow_free(vcpu, "kpatch"); + } + load_vmcs12_host_state(vcpu, vmcs12); vmcs12->vm_exit_reason = reason | VMX_EXIT_REASONS_FAILED_VMENTRY; vmcs12->exit_qualification = qualification; kpatch-0.5.0/test/integration/ubuntu-16.04/multiple.test000077500000000000000000000017051321664017000230220ustar00rootroot00000000000000#!/bin/bash SCRIPTDIR="$(readlink -f $(dirname $(type -p $0)))" ROOTDIR="$(readlink -f $SCRIPTDIR/../../..)" KPATCH="sudo $ROOTDIR/kpatch/kpatch" set -o errexit die() { echo "ERROR: $@" >&2 exit 1 } ko_to_test() { tmp=${1%.ko}-LOADED.test echo ${tmp#kpatch-} } # make sure any modules added here are disjoint declare -a modules=(kpatch-cmdline-string.ko kpatch-meminfo-string.ko) for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) $SCRIPTDIR/$testprog && die "$SCRIPTDIR/$testprog succeeded before loading any modules" done for mod in "${modules[@]}"; do $KPATCH load $mod done for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) $SCRIPTDIR/$testprog || die "$SCRIPTDIR/$testprog failed after loading modules" done for mod in "${modules[@]}"; do $KPATCH unload $mod done for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) $SCRIPTDIR/$testprog && die "$SCRIPTDIR/$testprog succeeded after unloading modules" done exit 0 kpatch-0.5.0/test/integration/ubuntu-16.04/new-function.patch000066400000000000000000000015211321664017000237140ustar00rootroot00000000000000diff -Nupr src.orig/drivers/tty/n_tty.c src/drivers/tty/n_tty.c --- src.orig/drivers/tty/n_tty.c 2016-12-15 19:55:54.840000000 +0000 +++ src/drivers/tty/n_tty.c 2016-12-15 19:57:43.856000000 +0000 @@ -2328,7 +2328,7 @@ static ssize_t n_tty_read(struct tty_str * lock themselves) */ -static ssize_t n_tty_write(struct tty_struct *tty, struct file *file, +static ssize_t noinline kpatch_n_tty_write(struct tty_struct *tty, struct file *file, const unsigned char *buf, size_t nr) { const unsigned char *b = buf; @@ -2415,6 +2415,12 @@ break_out: return (b - buf) ? b - buf : retval; } +static ssize_t n_tty_write(struct tty_struct *tty, struct file *file, + const unsigned char *buf, size_t nr) +{ + return kpatch_n_tty_write(tty, file, buf, nr); +} + /** * n_tty_poll - poll method for N_TTY * @tty: terminal device kpatch-0.5.0/test/integration/ubuntu-16.04/new-globals.patch000066400000000000000000000017551321664017000235230ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/cmdline.c src/fs/proc/cmdline.c --- src.orig/fs/proc/cmdline.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/cmdline.c 2016-12-15 19:57:48.084000000 +0000 @@ -27,3 +27,10 @@ static int __init proc_cmdline_init(void return 0; } fs_initcall(proc_cmdline_init); + +#include +void kpatch_print_message(void) +{ + if (!jiffies) + printk("hello there!\n"); +} diff -Nupr src.orig/fs/proc/meminfo.c src/fs/proc/meminfo.c --- src.orig/fs/proc/meminfo.c 2016-12-15 19:55:39.084000000 +0000 +++ src/fs/proc/meminfo.c 2016-12-15 19:57:48.084000000 +0000 @@ -19,6 +19,8 @@ #include #include "internal.h" +void kpatch_print_message(void); + void __attribute__((weak)) arch_report_meminfo(struct seq_file *m) { } @@ -53,6 +55,7 @@ static int meminfo_proc_show(struct seq_ /* * Tagged format, for easy grepping and expansion. 
*/ + kpatch_print_message(); seq_printf(m, "MemTotal: %8lu kB\n" "MemFree: %8lu kB\n" kpatch-0.5.0/test/integration/ubuntu-16.04/parainstructions-section.patch000066400000000000000000000006551321664017000263610ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/generic.c src/fs/proc/generic.c --- src.orig/fs/proc/generic.c 2016-12-15 19:55:39.076000000 +0000 +++ src/fs/proc/generic.c 2016-12-15 19:57:52.340000000 +0000 @@ -195,6 +195,7 @@ int proc_alloc_inum(unsigned int *inum) unsigned int i; int error; + printk("kpatch-test: testing change to .parainstructions section\n"); retry: if (!ida_pre_get(&proc_inum_ida, GFP_KERNEL)) return -ENOMEM; kpatch-0.5.0/test/integration/ubuntu-16.04/replace-section-references.patch000066400000000000000000000007461321664017000265040ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/x86.c src/arch/x86/kvm/x86.c --- src.orig/arch/x86/kvm/x86.c 2016-12-15 19:55:57.436000000 +0000 +++ src/arch/x86/kvm/x86.c 2016-12-15 19:57:56.596000000 +0000 @@ -230,6 +230,8 @@ static void shared_msr_update(unsigned s void kvm_define_shared_msr(unsigned slot, u32 msr) { + if (!jiffies) + printk("kpatch kvm define shared msr\n"); BUG_ON(slot >= KVM_NR_SHARED_MSRS); shared_msrs_global.msrs[slot] = msr; if (slot >= shared_msrs_global.nr) kpatch-0.5.0/test/integration/ubuntu-16.04/shadow-newpid-LOADED.test000077500000000000000000000000551321664017000246630ustar00rootroot00000000000000#!/bin/bash grep -q newpid: /proc/$$/status kpatch-0.5.0/test/integration/ubuntu-16.04/shadow-newpid.patch000066400000000000000000000040461321664017000240560ustar00rootroot00000000000000diff -Nupr src.orig/fs/proc/array.c src/fs/proc/array.c --- src.orig/fs/proc/array.c 2016-12-15 19:55:39.080000000 +0000 +++ src/fs/proc/array.c 2016-12-15 19:58:00.840000000 +0000 @@ -334,13 +334,20 @@ static inline void task_seccomp(struct s #endif } +#include "kpatch.h" static inline void task_context_switch_counts(struct seq_file *m, struct task_struct *p) { + int *newpid; + seq_printf(m, "voluntary_ctxt_switches:\t%lu\n" "nonvoluntary_ctxt_switches:\t%lu\n", p->nvcsw, p->nivcsw); + + newpid = kpatch_shadow_get(p, "newpid"); + if (newpid) + seq_printf(m, "newpid:\t%d\n", *newpid); } static void task_cpus_allowed(struct seq_file *m, struct task_struct *task) diff -Nupr src.orig/kernel/exit.c src/kernel/exit.c --- src.orig/kernel/exit.c 2016-12-15 19:56:00.184000000 +0000 +++ src/kernel/exit.c 2016-12-15 19:58:00.840000000 +0000 @@ -650,6 +650,7 @@ static void check_stack_usage(void) static inline void check_stack_usage(void) {} #endif +#include "kpatch.h" void do_exit(long code) { struct task_struct *tsk = current; @@ -758,6 +759,8 @@ void do_exit(long code) cgroup_exit(tsk); + kpatch_shadow_free(tsk, "newpid"); + /* * FIXME: do that only when needed, using sched_exit tracepoint */ diff -Nupr src.orig/kernel/fork.c src/kernel/fork.c --- src.orig/kernel/fork.c 2016-12-15 19:56:00.184000000 +0000 +++ src/kernel/fork.c 2016-12-15 19:58:00.840000000 +0000 @@ -1726,6 +1726,7 @@ struct task_struct *fork_idle(int cpu) * It copies the process, and if successful kick-starts * it and waits for it to finish using the VM if required. 
*/ +#include "kpatch.h" long _do_fork(unsigned long clone_flags, unsigned long stack_start, unsigned long stack_size, @@ -1764,6 +1765,13 @@ long _do_fork(unsigned long clone_flags, if (!IS_ERR(p)) { struct completion vfork; struct pid *pid; + int *newpid; + static int ctr = 0; + + newpid = kpatch_shadow_alloc(p, "newpid", sizeof(*newpid), + GFP_KERNEL); + if (newpid) + *newpid = ctr++; trace_sched_process_fork(current, p); kpatch-0.5.0/test/integration/ubuntu-16.04/smp-locks-section.patch000066400000000000000000000007451321664017000246610ustar00rootroot00000000000000diff -Nupr src.orig/drivers/tty/tty_buffer.c src/drivers/tty/tty_buffer.c --- src.orig/drivers/tty/tty_buffer.c 2016-12-15 19:55:54.840000000 +0000 +++ src/drivers/tty/tty_buffer.c 2016-12-15 19:58:05.088000000 +0000 @@ -255,6 +255,8 @@ static int __tty_buffer_request_room(str struct tty_buffer *b, *n; int left, change; + if (!size) + printk("kpatch-test: testing .smp_locks section changes\n"); b = buf->tail; if (b->flags & TTYB_NORMAL) left = 2 * b->size - b->used; kpatch-0.5.0/test/integration/ubuntu-16.04/special-static-2.patch000066400000000000000000000012251321664017000243450ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/x86.c src/arch/x86/kvm/x86.c --- src.orig/arch/x86/kvm/x86.c 2016-12-15 19:55:57.436000000 +0000 +++ src/arch/x86/kvm/x86.c 2016-12-15 19:58:09.352000000 +0000 @@ -2026,12 +2026,20 @@ static void record_steal_time(struct kvm &vcpu->arch.st.steal, sizeof(struct kvm_steal_time)); } +void kpatch_kvm_x86_foo(void) +{ + if (!jiffies) + printk("kpatch kvm x86 foo\n"); +} + int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) { bool pr = false; u32 msr = msr_info->index; u64 data = msr_info->data; + kpatch_kvm_x86_foo(); + switch (msr) { case MSR_AMD64_NB_CFG: case MSR_IA32_UCODE_REV: kpatch-0.5.0/test/integration/ubuntu-16.04/special-static.patch000066400000000000000000000010351321664017000242050ustar00rootroot00000000000000diff -Nupr src.orig/kernel/fork.c src/kernel/fork.c --- src.orig/kernel/fork.c 2016-12-15 19:56:00.184000000 +0000 +++ src/kernel/fork.c 2016-12-15 19:58:13.588000000 +0000 @@ -1143,10 +1143,18 @@ static void posix_cpu_timers_init_group( INIT_LIST_HEAD(&sig->cpu_timers[2]); } +void kpatch_foo(void) +{ + if (!jiffies) + printk("kpatch copy signal\n"); +} + static int copy_signal(unsigned long clone_flags, struct task_struct *tsk) { struct signal_struct *sig; + kpatch_foo(); + if (clone_flags & CLONE_THREAD) return 0; kpatch-0.5.0/test/integration/ubuntu-16.04/tracepoints-section.patch000066400000000000000000000007361321664017000253040ustar00rootroot00000000000000diff -Nupr src.orig/kernel/time/timer.c src/kernel/time/timer.c --- src.orig/kernel/time/timer.c 2016-01-10 23:01:32.000000000 +0000 +++ src/kernel/time/timer.c 2016-12-15 20:27:00.368000000 +0000 @@ -1433,6 +1433,9 @@ static void run_timer_softirq(struct sof { struct tvec_base *base = this_cpu_ptr(&tvec_bases); + if (!base) + printk("kpatch-test: testing __tracepoints section changes\n"); + if (time_after_eq(jiffies, base->timer_jiffies)) __run_timers(base); } kpatch-0.5.0/test/integration/ubuntu-16.04/warn-detect-FAIL.patch000066400000000000000000000004151321664017000242270ustar00rootroot00000000000000diff -Nupr src.orig/arch/x86/kvm/x86.c src/arch/x86/kvm/x86.c --- src.orig/arch/x86/kvm/x86.c 2016-12-15 19:55:57.436000000 +0000 +++ src/arch/x86/kvm/x86.c 2016-12-15 19:58:17.844000000 +0000 @@ -1,3 +1,4 @@ + /* * Kernel-based Virtual Machine driver for Linux * 
kpatch-0.5.0/test/testmod/000077500000000000000000000000001321664017000154245ustar00rootroot00000000000000kpatch-0.5.0/test/testmod/Makefile000066400000000000000000000012321321664017000170620ustar00rootroot00000000000000BUILD ?= /lib/modules/$(shell uname -r)/build testmod.ko: testmod_drv.c patch < patch KCFLAGS="-ffunction-sections -fdata-sections" $(MAKE) -C $(BUILD) M=$(PWD) testmod.ko strip --keep-file-symbols -d testmod_drv.o cp testmod_drv.o testmod_drv.o.patched patch -R < patch KCFLAGS="-ffunction-sections -fdata-sections" $(MAKE) -C $(BUILD) M=$(PWD) testmod.ko strip --keep-file-symbols -d testmod_drv.o cp testmod_drv.o testmod_drv.o.orig $(MAKE) -C $(BUILD) M=$(PWD) clean $(MAKE) -C $(BUILD) M=$(PWD) testmod.ko all: testmod.ko clean: $(MAKE) -C $(BUILD) M=$(PWD) clean rm *.orig *.patched # kbuild rules obj-m := testmod.o testmod-y := testmod_drv.o kpatch-0.5.0/test/testmod/README000066400000000000000000000002171321664017000163040ustar00rootroot00000000000000To test, run ./doit.sh from the current directory. To test on a remote system, set remote system using REMOTE in doit.sh. Then run ./doit.sh. kpatch-0.5.0/test/testmod/doit-client.sh000077500000000000000000000007351321664017000202030ustar00rootroot00000000000000#!/bin/bash #set -x rmmod testmod 2> /dev/null rmmod kpatch 2> /dev/null insmod testmod.ko || exit 1 insmod kpatch.ko || exit 1 if [[ "$(cat /sys/kernel/testmod/value)" != "2" ]] then exit 1 fi insmod kpatch-patch.ko dmesg | tail if [[ "$(cat /sys/kernel/testmod/value)" != "3" ]] then exit 1 fi echo 0 > /sys/kernel/kpatch/kpatch_patch/enabled rmmod kpatch-patch if [[ "$(cat /sys/kernel/testmod/value)" != "2" ]] then exit 1 fi rmmod kpatch rmmod testmod echo "SUCCESS" kpatch-0.5.0/test/testmod/doit.sh000077500000000000000000000022051321664017000167210ustar00rootroot00000000000000#!/bin/bash #set -x # If testing on a remote machine, set it here # Probably want to use preshared keys. unset REMOTE #REMOTE="192.168.100.150" cd ../../ || exit 1 make clean || exit 1 make || exit 1 cd test/testmod || exit 1 make || exit 1 ../../kpatch-build/create-diff-object testmod_drv.o.orig testmod_drv.o.patched testmod.ko output.o || exit 1 cd ../../kmod/patch || exit 1 make clean || exit 1 cp ../../test/testmod/output.o . || exit 1 md5sum output.o | awk '{printf "%s\0", $1}' > checksum.tmp || exit 1 objcopy --add-section .kpatch.checksum=checksum.tmp --set-section-flags .kpatch.checksum=alloc,load,contents,readonly output.o || exit 1 rm -f checksum.tmp KBUILD_EXTRA_SYMBOLS="$(readlink -e ../../kmod/core/Module.symvers)" make || exit 1 cd ../../test/testmod if [[ -z "$REMOTE" ]] then cp ../../kmod/core/kpatch.ko . cp ../../kmod/patch/kpatch-patch.ko . sudo ./doit-client.sh else scp ../../kmod/core/kpatch.ko root@$REMOTE:~/. || exit 1 scp ../../kmod/patch/kpatch-patch.ko root@$REMOTE:~/. || exit 1 scp testmod.ko root@$REMOTE:~/. || exit 1 scp doit-client.sh root@$REMOTE:~/. 
|| exit 1 ssh root@$REMOTE ./doit-client.sh fi kpatch-0.5.0/test/testmod/patch000066400000000000000000000006221321664017000164460ustar00rootroot00000000000000--- testmod_drv.c.orig 2014-06-02 16:49:49.428509600 -0500 +++ testmod_drv.c 2014-06-02 16:49:56.973656791 -0500 @@ -11,7 +11,7 @@ static ssize_t value_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { - return sprintf(buf, "%d\n", value); + return sprintf(buf, "%d\n", value+1); } static struct kobj_attribute testmod_value_attr = __ATTR_RO(value); kpatch-0.5.0/test/testmod/testmod_drv.c000066400000000000000000000016121321664017000201220ustar00rootroot00000000000000#define pr_fmt(fmt) "testmod: " fmt #include #include #include #include static struct kobject *testmod_kobj; int value = 2; static ssize_t value_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { return sprintf(buf, "%d\n", value); } static struct kobj_attribute testmod_value_attr = __ATTR_RO(value); static int testmod_init(void) { int ret; testmod_kobj = kobject_create_and_add("testmod", kernel_kobj); if (!testmod_kobj) return -ENOMEM; ret = sysfs_create_file(testmod_kobj, &testmod_value_attr.attr); if (ret) { kobject_put(testmod_kobj); return ret; } return 0; } static void testmod_exit(void) { sysfs_remove_file(testmod_kobj, &testmod_value_attr.attr); kobject_put(testmod_kobj); } module_init(testmod_init); module_exit(testmod_exit); MODULE_LICENSE("GPL");
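Note on the testmod harness above: the README and doit.sh describe two flows (local, and remote via REMOTE). As a quick reference, here is a condensed sketch of the local verification step only — a simplified, hedged variant of doit-client.sh, not a replacement for it. It assumes doit.sh has already built kpatch.ko, kpatch-patch.ko and testmod.ko into the current directory; the expected values 2 and 3 come from testmod_drv.c and the one-line patch file above.

#!/bin/bash
# Condensed local check: load testmod and the kpatch core module, apply the
# generated patch module, and confirm /sys/kernel/testmod/value goes 2 -> 3.
set -e
sudo insmod testmod.ko
sudo insmod kpatch.ko
[[ "$(cat /sys/kernel/testmod/value)" == "2" ]]   # unpatched value_show()
sudo insmod kpatch-patch.ko
[[ "$(cat /sys/kernel/testmod/value)" == "3" ]]   # patched: prints value+1
echo 0 | sudo tee /sys/kernel/kpatch/kpatch_patch/enabled > /dev/null
sudo rmmod kpatch-patch kpatch testmod

Because of set -e, any failed check aborts the script with a non-zero exit status, which mirrors the explicit "|| exit 1" and exit-code checks in doit-client.sh and doit.sh.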