kpatch-0.3.2/.gitignore:

*.o
*.o.cmd
*.o.d
*.ko
*.ko.cmd
*.mod.c
*.swp
*.d
.tmp_versions
Module.symvers
kpatch-build/lookup
kpatch-build/create-diff-object
man/kpatch.1.gz
man/kpatch-build.1.gz

kpatch-0.3.2/COPYING:

GNU GENERAL PUBLIC LICENSE
Version 2, June 1991

Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

Preamble

The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. 
The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. 
b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. 
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. 
If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. 
The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. 
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. 
If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. 
kpatch-0.3.2/Makefile:

include Makefile.inc

SUBDIRS = kpatch-build kpatch kmod man contrib
BUILD_DIRS = $(SUBDIRS:%=build-%)
INSTALL_DIRS = $(SUBDIRS:%=install-%)
UNINSTALL_DIRS = $(SUBDIRS:%=uninstall-%)
CLEAN_DIRS = $(SUBDIRS:%=clean-%)

.PHONY: $(SUBDIRS) $(BUILD_DIRS) $(INSTALL_DIRS) $(CLEAN_DIRS)

all: $(BUILD_DIRS)
$(BUILD_DIRS):
	$(MAKE) -C $(@:build-%=%)

install: $(INSTALL_DIRS)
$(INSTALL_DIRS):
	$(MAKE) -C $(@:install-%=%) install

uninstall: $(UNINSTALL_DIRS)
$(UNINSTALL_DIRS):
	$(MAKE) -C $(@:uninstall-%=%) uninstall

clean: $(CLEAN_DIRS)
$(CLEAN_DIRS):
	$(MAKE) -C $(@:clean-%=%) clean

kpatch-0.3.2/Makefile.inc:

SHELL = /bin/sh
CC = gcc
INSTALL = /usr/bin/install
PREFIX ?= /usr/local
LIBDIR ?= lib
LIBEXEC ?= libexec
BINDIR = $(DESTDIR)$(PREFIX)/bin
SBINDIR = $(DESTDIR)$(PREFIX)/sbin
MODULESDIR = $(DESTDIR)$(PREFIX)/$(LIBDIR)/kpatch
LIBEXECDIR = $(DESTDIR)$(PREFIX)/$(LIBEXEC)/kpatch
DATADIR = $(DESTDIR)$(PREFIX)/share/kpatch
MANDIR = $(DESTDIR)$(PREFIX)/share/man/man1
SYSTEMDDIR = $(DESTDIR)$(PREFIX)/lib/systemd/system
BUILDMOD ?= yes

.PHONY: all install clean
.DEFAULT: all

kpatch-0.3.2/README.md:

kpatch: dynamic kernel patching
===============================

kpatch is a Linux dynamic kernel patching infrastructure which allows you to patch a running kernel without rebooting or restarting any processes. It enables sysadmins to apply critical security patches to the kernel immediately, without having to wait for long-running tasks to complete, for users to log off, or for scheduled reboot windows. It gives more control over uptime without sacrificing security or stability.

**WARNING: Use with caution!
Kernel crashes, spontaneous reboots, and data loss may occur!**

Here's a video of kpatch in action:

[![kpatch video](http://img.youtube.com/vi/juyQ5TsJRTA/0.jpg)](http://www.youtube.com/watch?v=juyQ5TsJRTA)

And a few more:

- https://www.youtube.com/watch?v=rN0sFjrJQfU
- https://www.youtube.com/watch?v=Mftc80KyjA4

Installation
------------

### Prerequisites

#### Fedora 23

*NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.*

Install the dependencies for compiling kpatch:

```bash
sudo dnf install gcc kernel-devel elfutils elfutils-devel
```

Install the dependencies for the "kpatch-build" command:

```bash
sudo dnf install rpmdevtools pesign yum-utils openssl wget numactl-devel
sudo dnf builddep kernel
sudo dnf debuginfo-install kernel

# optional, but highly recommended
sudo dnf install ccache
ccache --max-size=5G
```

#### RHEL 7

*NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.*

Install the dependencies for compiling kpatch:

```bash
sudo yum install gcc kernel-devel elfutils elfutils-devel
```

Install the dependencies for the "kpatch-build" command:

```bash
sudo yum-config-manager --enable rhel-7-server-optional-rpms
sudo yum install rpmdevtools pesign yum-utils zlib-devel \
  binutils-devel newt-devel python-devel perl-ExtUtils-Embed \
  audit-libs-devel numactl-devel pciutils-devel bison ncurses-devel
sudo yum-builddep kernel
sudo debuginfo-install kernel

# optional, but highly recommended
sudo yum install https://dl.fedoraproject.org/pub/epel/7/x86_64/c/ccache-3.1.9-3.el7.x86_64.rpm
ccache --max-size=5G
```

#### CentOS 7

*NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.*

Install the dependencies for compiling kpatch:

```bash
sudo yum install gcc kernel-devel elfutils elfutils-devel
```

Install the dependencies for the "kpatch-build" command:

```bash
sudo yum install rpmdevtools pesign yum-utils zlib-devel \
  binutils-devel newt-devel python-devel perl-ExtUtils-Embed \
  audit-libs audit-libs-devel numactl-devel pciutils-devel bison

# enable CentOS 7 debug repo
sudo yum-config-manager --enable debug
sudo yum-builddep kernel
sudo debuginfo-install kernel

# optional, but highly recommended - enable EPEL 7
sudo yum install ccache
ccache --max-size=5G
```

#### Oracle Linux 7

*NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.*

Install the dependencies for compiling kpatch:

```bash
sudo yum install gcc kernel-devel elfutils elfutils-devel
```

Install the dependencies for the "kpatch-build" command:

```bash
sudo yum install rpmdevtools pesign yum-utils zlib-devel \
  binutils-devel newt-devel python-devel perl-ExtUtils-Embed \
  audit-libs numactl-devel pciutils-devel bison

# enable ol7_optional_latest repo
sudo yum-config-manager --enable ol7_optional_latest
sudo yum-builddep kernel

# manually install kernel debuginfo packages
rpm -ivh https://oss.oracle.com/ol7/debuginfo/kernel-debuginfo-$(uname -r).rpm
rpm -ivh https://oss.oracle.com/ol7/debuginfo/kernel-debuginfo-common-x86_64-$(uname -r).rpm

# optional, but highly recommended - enable EPEL 7
sudo yum install ccache
ccache --max-size=5G
```

#### Ubuntu 14.04

*NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.*

Install the dependencies for compiling kpatch:

```bash
apt-get install make gcc libelf-dev
```

Install the dependencies for the "kpatch-build" command:

```bash
apt-get install dpkg-dev
apt-get build-dep linux

# optional, but highly recommended
apt-get install ccache
ccache --max-size=5G
```

Install kernel debug symbols:

```bash
# Add ddebs repository
codename=$(lsb_release -sc)
sudo tee /etc/apt/sources.list.d/ddebs.list << EOF
deb http://ddebs.ubuntu.com/ ${codename} main restricted universe multiverse
deb http://ddebs.ubuntu.com/ ${codename}-security main restricted universe multiverse
deb http://ddebs.ubuntu.com/ ${codename}-updates main restricted universe multiverse
deb http://ddebs.ubuntu.com/ ${codename}-proposed main restricted universe multiverse
EOF

# add APT key
wget -Nq http://ddebs.ubuntu.com/dbgsym-release-key.asc -O- | sudo apt-key add -
apt-get update && apt-get install linux-image-$(uname -r)-dbgsym
```

#### Debian 8.0

*NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.*

Install the dependencies for compiling kpatch:

    apt-get install make gcc libelf-dev build-essential

Install and prepare the kernel sources:

```bash
apt-get install linux-source-$(uname -r)
cd /usr/src && tar xvf linux-source-$(uname -r).tar.xz && ln -s linux-source-$(uname -r) linux && cd linux
cp /boot/config-$(uname -r) .config
for OPTION in CONFIG_KALLSYMS_ALL CONFIG_FUNCTION_TRACER ; do sed -i "s/# $OPTION is not set/$OPTION=y/g" .config ; done
sed -i "s/^SUBLEVEL.*/SUBLEVEL =/" Makefile
make -j`getconf _NPROCESSORS_CONF` deb-pkg KDEB_PKGVERSION=$(uname -r).9-1
```

Install the kernel packages and reboot:

    dpkg -i /usr/src/*.deb
    reboot

Install the dependencies for the "kpatch-build" command:

    apt-get install dpkg-dev
    apt-get build-dep linux

    # optional, but highly recommended
    apt-get install ccache
    ccache --max-size=5G

#### Debian 7.x

*NOTE: You'll need about 15GB of free disk space for the kpatch-build cache in `~/.kpatch` and for ccache.*

Add backports repositories:

```bash
echo "deb http://http.debian.net/debian wheezy-backports main" > /etc/apt/sources.list.d/wheezy-backports.list
echo "deb http://packages.incloudus.com backports-incloudus main" > /etc/apt/sources.list.d/incloudus.list
wget http://packages.incloudus.com/incloudus/incloudus.pub -O- | apt-key add -
aptitude update
```

Install the linux kernel, symbols and gcc 4.9:

    aptitude install -t wheezy-backports -y initramfs-tools
    aptitude install -y gcc gcc-4.9 g++-4.9 linux-image-3.14 linux-image-3.14-dbg

Configure gcc 4.9 as the default gcc compiler:

    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.7 20
    update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 50
    update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.7 20
    update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.9 50

Install kpatch and these dependencies:

    aptitude install kpatch

Configure ccache (installed by kpatch package):

    ccache --max-size=5G

### Build

Compile kpatch:

    make

### Install

OPTIONAL: Install kpatch to `/usr/local`:

    sudo make install

Alternatively, the kpatch and kpatch-build scripts can be run directly from the git tree.

Quick start
-----------

> NOTE: While kpatch is designed to work with any recent Linux kernel on any
> distribution, the `kpatch-build` command has **ONLY** been tested and confirmed
> to work on Fedora 20 and later, RHEL 7, Oracle Linux 7, CentOS 7 and Ubuntu 14.04.

First, make a source code patch against the kernel tree using diff, git, or quilt.

As a contrived example, let's patch /proc/meminfo to show VmallocChunk in ALL CAPS so we can see it better:

    $ cat meminfo-string.patch
    Index: src/fs/proc/meminfo.c
    ===================================================================
    --- src.orig/fs/proc/meminfo.c
    +++ src/fs/proc/meminfo.c
    @@ -95,7 +95,7 @@ static int meminfo_proc_show(struct seq_
     		"Committed_AS:   %8lu kB\n"
     		"VmallocTotal:   %8lu kB\n"
     		"VmallocUsed:    %8lu kB\n"
    -		"VmallocChunk:   %8lu kB\n"
    +		"VMALLOCCHUNK:   %8lu kB\n"
     #ifdef CONFIG_MEMORY_FAILURE
     		"HardwareCorrupted: %5lu kB\n"
     #endif

Build the patch module:

    $ kpatch-build -t vmlinux meminfo-string.patch
    Using cache at /home/jpoimboe/.kpatch/3.13.10-200.fc20.x86_64/src
    Testing patch file
    checking file fs/proc/meminfo.c
    Building original kernel
    Building patched kernel
    Detecting changed objects
    Rebuilding changed objects
    Extracting new and modified ELF sections
    meminfo.o: changed function: meminfo_proc_show
    Building patch module: kpatch-meminfo-string.ko
    SUCCESS

> NOTE: The `-t vmlinux` option is used to tell `kpatch-build` to only look for
> changes in the `vmlinux`
base kernel image, which is much faster than also
> compiling all the kernel modules. If your patch affects a kernel module, you
> can either omit this option to build everything, and have `kpatch-build`
> detect which modules changed, or you can specify the affected kernel build
> targets with multiple `-t` options.

That outputs a patch module named `kpatch-meminfo-string.ko` in the current directory. Now apply it to the running kernel:

    $ sudo kpatch load kpatch-meminfo-string.ko
    loading core module: /usr/local/lib/modules/3.13.10-200.fc20.x86_64/kpatch/kpatch.ko
    loading patch module: kpatch-meminfo-string.ko

Done! The kernel is now patched.

    $ grep -i chunk /proc/meminfo
    VMALLOCCHUNK:   34359337092 kB

Patch Author Guide
------------------

Unfortunately, live patching isn't always as easy as the previous example, and can have some major pitfalls if you're not careful. To learn more about how to properly create live patches, see the [Patch Author Guide](doc/patch-author-guide.md).

How it works
------------

kpatch works at a function granularity: old functions are replaced with new ones. It has four main components:

- **kpatch-build**: a collection of tools which convert a source diff patch to a patch module. They work by compiling the kernel both with and without the source patch, comparing the binaries, and generating a patch module which includes new binary versions of the functions to be replaced.

- **patch module**: a kernel module (.ko file) which includes the replacement functions and metadata about the original functions.

- **kpatch core module**: a kernel module (.ko file) which provides an interface for the patch modules to register new functions for replacement. It uses the kernel ftrace subsystem to hook into the original function's mcount call instruction, so that a call to the original function is redirected to the replacement function.

- **kpatch utility:** a command-line tool which allows a user to manage a collection of patch modules.
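For example, once the quick-start module above is loaded, it shows up in `lsmod` like any other kernel module, but under a normalized name: the kernel drops the `.ko` suffix and replaces dashes with underscores. A small illustrative helper (not part of kpatch itself) shows the mapping:

```bash
# Derive the in-kernel module name from a patch module file name, the way
# the kernel does: strip the .ko suffix and turn dashes into underscores.
mod_name() {
    local base=${1##*/}        # drop any leading directory path
    base=${base%.ko}           # drop the .ko suffix
    printf '%s\n' "${base//-/_}"
}

mod_name kpatch-meminfo-string.ko    # prints: kpatch_meminfo_string
```

So after `kpatch load kpatch-meminfo-string.ko`, you would look for `kpatch_meminfo_string` in the `lsmod` output.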
One or more patch modules may be configured to load at boot time, so that a system can remain patched even after a reboot into the same version of the kernel.

### kpatch-build

The "kpatch-build" command converts a source-level diff patch file to a kernel patch module. Most of its work is performed by the kpatch-build script, which uses a utility named `create-diff-object` to compare changed objects.

The primary steps in kpatch-build are:

- Build the unstripped vmlinux for the kernel
- Patch the source tree
- Rebuild vmlinux and monitor which objects are being rebuilt. These are the "changed objects".
- Recompile each changed object with `-ffunction-sections -fdata-sections`, resulting in the changed patched objects
- Unpatch the source tree
- Recompile each changed object with `-ffunction-sections -fdata-sections`, resulting in the changed original objects
- For every changed object, use `create-diff-object` to do the following:
  * Analyze each original/patched object pair for patchability
  * Add `.kpatch.funcs` and `.rela.kpatch.funcs` sections to the output object. The kpatch core module uses this to determine the list of functions that need to be redirected using ftrace.
  * Add `.kpatch.dynrelas` and `.rela.kpatch.dynrelas` sections to the output object. This will be used to resolve references to non-included local and non-exported global symbols. These relocations will be resolved by the kpatch core module.
  * Generate the resulting output object containing the new and modified sections
- Link all the output objects into a cumulative object
- Generate the patch module

### Patching

The patch modules register with the core module (`kpatch.ko`). They provide information about original functions that need to be replaced, and corresponding function pointers to the replacement functions.

The core module registers a handler function with ftrace. The handler function is called by ftrace immediately before the original function begins executing.
This occurs with the help of the reserved mcount call at the beginning of every function, created by the gcc `-mfentry` flag. The ftrace handler then modifies the return instruction pointer (IP) address on the stack and returns to ftrace, which then restores the original function's arguments and stack, and "returns" to the new function.

Limitations
-----------

- Patches which modify init functions (annotated with `__init`) are not supported. kpatch-build will return an error if the patch attempts to do so.

- Patches which modify statically allocated data are not supported. kpatch-build will detect that and return an error. (In the future we will add a facility to support it. It will probably require the user to write code which runs at patch module loading time and manually updates the data.)

- Patches which change the way a function interacts with dynamically allocated data might be safe, or might not. It isn't possible for kpatch-build to verify the safety of this kind of patch. It's up to the user to understand what the patch does, whether the new functions interact with dynamically allocated data in a different way than the old functions did, and whether it would be safe to atomically apply such a patch to a running kernel.

- Patches which modify functions in the vDSO are not supported. These run in user space and ftrace can't hook them.

- Some incompatibilities currently exist between kpatch and usage of ftrace and kprobes. See the Frequently Asked Questions section for more details.

Frequently Asked Questions
--------------------------

**Q. What's the relationship between kpatch and the upstream Linux live kernel patching component (livepatch)?**

Starting with Linux 4.0, the Linux kernel has livepatch, a converged live kernel patching framework. Livepatch is similar in functionality to the kpatch core module, though it doesn't yet have all the features that kpatch does. kpatch-build already works with both livepatch and kpatch.
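You can check whether your own kernel was built with livepatch support by looking for the config symbol in the kernel build config. A minimal sketch, assuming the config is available under `/boot` (some kernels expose it as `/proc/config.gz` instead):

```bash
# Return success if the given kernel config file has livepatch enabled.
has_livepatch() {
    grep -q '^CONFIG_LIVEPATCH=y' "$1"
}

# Typical usage on a running system (the config path is an assumption):
#   has_livepatch /boot/config-$(uname -r) \
#       && echo "livepatch patch module format" \
#       || echo "kpatch patch module format"
```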
If your kernel has CONFIG\_LIVEPATCH enabled, it detects that and builds a patch module in the livepatch format. Otherwise it builds a kpatch patch module.

Soon the kpatch script will also support both patch module formats (TODO issue [#479](https://github.com/dynup/kpatch/issues/479)).

**Q. Isn't this just a virus/rootkit injection framework?**

kpatch uses kernel modules to replace code. It requires the `CAP_SYS_MODULE` capability. If you already have that capability, then you already have the ability to arbitrarily modify the kernel, with or without kpatch.

**Q. How can I detect if somebody has patched the kernel?**

When a patch module is loaded, the `TAINT_USER` flag is set. To test for it, `cat /proc/sys/kernel/tainted` and check to see if the value of 64 has been OR'ed in. Eventually we hope to have a dedicated `TAINT_KPATCH` flag instead.

Note that the `TAINT_OOT_MODULE` flag (4096) will also be set, since the patch module is built outside the Linux kernel source tree. If your patch module is unsigned, the `TAINT_FORCED_MODULE` flag (2) will also be set. Starting with Linux 3.15, this will be changed to the more specific `TAINT_UNSIGNED_MODULE` (8192).

**Q. Will it destabilize my system?**

No, as long as the patch is chosen carefully. See the Limitations section above.

**Q. Why does kpatch use ftrace to jump to the replacement function instead of adding the jump directly?**

ftrace owns the first "call mcount" instruction of every kernel function. In order to keep compatibility with ftrace, we go through ftrace rather than updating the instruction directly. This approach also ensures that the code modification path is reliable, since ftrace has been doing it successfully for years.

**Q. Is kpatch compatible with \<insert kernel debugging subsystem here\>?**

We aim to be good kernel citizens and maintain compatibility. A kpatch replacement function is no different than a function loaded by any other kernel module.
Each replacement function has its own symbol name and kallsyms entry, so it looks like a normal function to the kernel.

- **oops stack traces**: Yes. If the replacement function is involved in an oops, the stack trace will show the function and kernel module name of the replacement function, just like any other kernel module function. The oops message will also show the taint flag (currently `TAINT_USER`).
- **kdump/crash**: Yes. Replacement functions are normal functions, so crash will have no issues.
- **ftrace**: Yes, but certain uses of ftrace which involve opening the `/sys/kernel/debug/tracing/trace` file or using `trace-cmd record` can result in a tiny window of time where a patch gets temporarily disabled. Therefore it's a good idea to avoid using ftrace on a patched system until this issue is resolved.
- **systemtap/kprobes**: Some incompatibilities exist.
  - If you set up a kprobe module at the beginning of a function before loading a kpatch module, and they both affect the same function, kprobes "wins" until the kprobe has been unregistered. This is tracked in issue [#47](https://github.com/dynup/kpatch/issues/47).
  - Setting a kretprobe before loading a kpatch module could be unsafe. See issue [#67](https://github.com/dynup/kpatch/issues/67).
- **perf**: Yes.
- **tracepoints**: Patches to a function which uses tracepoints will result in the tracepoints being effectively disabled as long as the patch is applied.

**Q. Why not use something like kexec instead?**

If you want to avoid a hardware reboot, but are ok with restarting processes, kexec is a good alternative.

**Q. If an application can't handle a reboot, it's designed wrong.**

That's a good poi... [system reboots]

**Q. What changes are needed in other upstream projects?**

We hope to make the following changes to other projects:

- kernel:
  - ftrace improvements to close any windows that would allow a patch to be inadvertently disabled

**Q. Is it possible to register a function that gets called atomically with `stop_machine` when the patch module loads and unloads?**

We do have plans to implement something like that.

**Q. What kernels are supported?**

kpatch needs gcc >= 4.8 and Linux >= 3.9.

**Q. Is it possible to remove a patch?**

Yes. Just run `kpatch unload`, which will disable and unload the patch module and restore the function to its original state.

**Q. Can you apply multiple patches?**

Yes, but to prevent any unexpected interactions between multiple patch modules, it's recommended that patch upgrades are cumulative, so that each patch is a superset of the previous patch. This can be achieved by combining the new patch with the previous patch using `combinediff` before running `kpatch-build`.

**Q. Why did kpatch-build detect a changed function that wasn't touched by the source patch?**

There could be a variety of reasons for this, such as:

- The patch changed an inline function.
- The compiler decided to inline a changed function, resulting in the outer function getting recompiled. This is common in the case where the inner function is static and is only called once.

**Q. How do I patch a function which is always on the stack of at least one task, such as schedule(), sys_poll(), sys_select(), sys_read(), sys_nanosleep(), etc?**

If you're sure it would be safe for the old function and the new function to run simultaneously, use the `KPATCH_FORCE_UNSAFE` macro to skip the activeness safety check for the function. See `kmod/patch/kpatch-macros.h` for more details.

**Q. Is patching of kernel modules supported?**

Yes.

**Q. Can you patch out-of-tree modules?**

Yes, though it's currently a bit of a manual process.
See this [message](https://www.redhat.com/archives/kpatch/2015-June/msg00004.html) on the kpatch mailing list for more information.

Get involved
------------

If you have questions or feedback, join the #kpatch IRC channel on freenode and say hi. We also have a [mailing list](https://www.redhat.com/mailman/listinfo/kpatch).

Contributions are very welcome. Feel free to open issues or PRs on GitHub. For big PRs, it's a good idea to discuss them first in GitHub issues or on the [mailing list](https://www.redhat.com/mailman/listinfo/kpatch) before you write a lot of code.

License
-------

kpatch is under the GPLv2 license.

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
kpatch-0.3.2/contrib/Makefile

include ../Makefile.inc

all:

install: all
	$(INSTALL) -d $(SYSTEMDDIR)
	$(INSTALL) -m 0644 kpatch.service $(SYSTEMDDIR)
	sed -i 's~PREFIX~$(PREFIX)~' $(SYSTEMDDIR)/kpatch.service

uninstall:
	$(RM) $(SYSTEMDDIR)/kpatch.service

clean:

kpatch-0.3.2/contrib/kpatch.service

[Unit]
Description="Apply kpatch kernel patches"

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=PREFIX/sbin/kpatch load --all
ExecStop=PREFIX/sbin/kpatch unload --all

[Install]
WantedBy=multi-user.target

kpatch-0.3.2/contrib/kpatch.spec

Name: kpatch
Summary: Dynamic kernel patching
Version: 0.3.2
License: GPLv2
Group: System Environment/Kernel
URL: http://github.com/dynup/kpatch
Release: 1%{?dist}
Source0: %{name}-%{version}.tar.gz
Requires: kmod bash
BuildRequires: gcc kernel-devel elfutils elfutils-devel
BuildRoot: %(mktemp -ud %{_tmppath}/%{name}-%{version}-%{release}-XXXXXX)

# needed for the kernel specific module
%define KVER %(uname -r)

%description
kpatch is a Linux dynamic kernel patching tool which allows you to patch a
running kernel without rebooting or restarting any processes. It enables
sysadmins to apply critical security patches to the kernel immediately,
without having to wait for long-running tasks to complete, users to log
off, or for scheduled reboot windows. It gives more control over up-time
without sacrificing security or stability.

%package runtime
Summary: Dynamic kernel patching
Buildarch: noarch
Provides: %{name} = %{version}

%description runtime
kpatch is a Linux dynamic kernel patching tool which allows you to patch a
running kernel without rebooting or restarting any processes. It enables
sysadmins to apply critical security patches to the kernel immediately,
without having to wait for long-running tasks to complete, users to log
off, or for scheduled reboot windows. It gives more control over up-time
without sacrificing security or stability.

%package build
Requires: %{name}
Summary: Dynamic kernel patching

%description build
kpatch is a Linux dynamic kernel patching tool which allows you to patch a
running kernel without rebooting or restarting any processes. It enables
sysadmins to apply critical security patches to the kernel immediately,
without having to wait for long-running tasks to complete, users to log
off, or for scheduled reboot windows. It gives more control over up-time
without sacrificing security or stability.

%package %{KVER}
Requires: %{name}
Summary: Dynamic kernel patching

%description %{KVER}
kpatch is a Linux dynamic kernel patching tool which allows you to patch a
running kernel without rebooting or restarting any processes. It enables
sysadmins to apply critical security patches to the kernel immediately,
without having to wait for long-running tasks to complete, users to log
off, or for scheduled reboot windows. It gives more control over up-time
without sacrificing security or stability.
%prep
%setup -q

%build
make %{_smp_mflags}

%install
rm -rf %{buildroot}
make install PREFIX=/%{_usr} DESTDIR=%{buildroot}

%clean
rm -rf %{buildroot}

%files runtime
%defattr(-,root,root,-)
%doc COPYING README.md
%{_sbindir}/kpatch
%{_mandir}/man1/kpatch.1*
%{_usr}/lib/systemd/system/*

%files %{KVER}
%defattr(-,root,root,-)
%{_usr}/lib/kpatch/%{KVER}

%files build
%defattr(-,root,root,-)
%{_bindir}/*
%{_libexecdir}/*
%{_datadir}/%{name}
%{_mandir}/man1/kpatch-build.1*

%changelog
* Wed Dec 3 2014 Josh Poimboeuf - 0.2.2-1
- rebased to current version

* Tue Sep 2 2014 Josh Poimboeuf - 0.2.1-1
- rebased to current version

* Mon Jul 28 2014 Josh Poimboeuf - 0.1.9-1
- moved core module to /usr/lib/kpatch
- rebased to current version

* Mon Jul 07 2014 Udo Seidel - 0.1.7-1
- rebased to current version

* Sat May 24 2014 Udo Seidel - 0.1.1-1
- rebased to current version

* Thu Apr 10 2014 Udo Seidel - 0.0.1-3
- added dracut module

* Tue Mar 25 2014 Udo Seidel - 0.0.1-2
- added man pages

* Sat Mar 22 2014 Udo Seidel - 0.0.1-1
- initial release

kpatch-0.3.2/doc/patch-author-guide.md

kpatch Patch Author Guide
=========================

Because kpatch-build is relatively easy to use, it can be easy to assume that a successful patch module build means that the patch is safe to apply. But in fact that's a very dangerous assumption.

There are many pitfalls that can be encountered when creating a live patch. This document attempts to guide the patch creation process. It's a work in progress. If you find it useful, please contribute!

Patch Analysis
--------------

kpatch provides _some_ guarantees, but it does not guarantee that all patches are safe to apply. Every patch must also be analyzed in-depth by a human.

The most important point here cannot be stressed enough.
**Do not blindly apply patches. There is no substitute for human analysis and reasoning on a per-patch basis. All patches must be thoroughly analyzed by a human kernel expert who completely understands the patch and the affected code and how they relate to the live patching environment.**

kpatch vs livepatch vs kGraft
-----------------------------

This document assumes that the kpatch core module is being used. Other live patching systems (e.g., livepatch and kGraft) have different consistency models. Each comes with its own guarantees, and there are some subtle differences. The guidance in this document applies **only** to kpatch.

Patch upgrades
--------------

Due to potential unexpected interactions between patches, it's highly recommended that when patching a system which has already been patched, the second patch should be a cumulative upgrade which is a superset of the first patch.

Data structure changes
----------------------

kpatch patches functions, not data. If the original patch involves a change to a data structure, the patch will require some rework, as changes to data structures are not allowed by default.

Usually you have to get creative. There are several possible ways to handle this:

### Change the code which uses the data structure

Sometimes, instead of changing the data structure itself, you can change the code which uses it.

For example, consider this [patch](http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=54a20552e1eae07aa240fa370a0293e006b5faed), which has the following hunk:

```
@@ -3270,6 +3277,7 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
 	[SVM_EXIT_EXCP_BASE + PF_VECTOR]	= pf_interception,
 	[SVM_EXIT_EXCP_BASE + NM_VECTOR]	= nm_interception,
 	[SVM_EXIT_EXCP_BASE + MC_VECTOR]	= mc_interception,
+	[SVM_EXIT_EXCP_BASE + AC_VECTOR]	= ac_interception,
 	[SVM_EXIT_INTR]				= intr_interception,
 	[SVM_EXIT_NMI]				= nmi_interception,
 	[SVM_EXIT_SMI]				= nop_on_interception,
```

`svm_exit_handlers[]` is an array of function pointers. The patch adds an `ac_interception` function pointer to the array at index `[SVM_EXIT_EXCP_BASE + AC_VECTOR]`. That change is incompatible with kpatch.

Looking at the source file, we can see that this function pointer is only accessed by a single function, `handle_exit()`:

```
	if (exit_code >= ARRAY_SIZE(svm_exit_handlers)
	    || !svm_exit_handlers[exit_code]) {
		WARN_ONCE(1, "svm: unexpected exit reason 0x%x\n", exit_code);
		kvm_queue_exception(vcpu, UD_VECTOR);
		return 1;
	}

	return svm_exit_handlers[exit_code](svm);
```

So an easy solution here is to just change the code to manually check for the new case before looking in the data structure:

```
@@ -3580,6 +3580,9 @@ static int handle_exit(struct kvm_vcpu *vcpu)
 		return 1;
 	}

+	if (exit_code == SVM_EXIT_EXCP_BASE + AC_VECTOR)
+		return ac_interception(svm);
+
 	return svm_exit_handlers[exit_code](svm);
 }
```

Not only is this an easy solution, it's also safer than touching data, since kpatch creates a barrier between the calling of old functions and new functions.

### Use a kpatch load hook

If you need to change the contents of an existing variable in-place, you can use the `KPATCH_LOAD_HOOK` macro to specify a function to be called when the patch module is loaded.

Don't forget to protect access to the data as needed.

Also be careful when upgrading. If patch A has a load hook which writes to X, and then you load patch B which is a superset of A, in some cases you may want to prevent patch B from writing to X, if A is already loaded.
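To make the load-hook idea concrete, here is a minimal sketch. Only the `KPATCH_LOAD_HOOK` macro (from `kmod/patch/kpatch-macros.h`) is real; the variable `foo_timeout_ms`, its lock `foo_lock`, and the new value are hypothetical, invented purely for illustration:

```
#include "kpatch-macros.h"

/*
 * Hypothetical example: the original patch changed the value of a
 * module-local variable.  kpatch can't patch data, so instead we update
 * the variable in-place when the patch module loads, holding the lock
 * that normally protects it.
 */
static void kpatch_load_fix_timeout(void)
{
	spin_lock(&foo_lock);
	foo_timeout_ms = 5000;	/* new value from the original patch */
	spin_unlock(&foo_lock);
}
KPATCH_LOAD_HOOK(kpatch_load_fix_timeout);
```

A corresponding `KPATCH_UNLOAD_HOOK` function would typically restore the old value so the module can be safely removed.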
Examples needed.

### Use a shadow variable

If you need to add a field to an existing data structure, or even many existing data structures, you can use the `kpatch_shadow_*()` functions. Example needed (see shadow-newpid.patch in the integration tests directory).

Data semantic changes
---------------------

Sometimes, the data itself remains the same, but how it's used is changed. A common example is locking semantic changes. Example needed.

Init code changes
-----------------

Any code which runs in an `__init` function or during module or device initialization is problematic, as it may have already run before the patch was applied. The patch may require a load hook function which detects whether such init code has run, and which rewrites or changes the original initialization to force it into the desired state. Some changes involving hardware init are inherently incompatible with live patching.

kpatch-0.3.2/examples/tcp_cubic-better-follow-cubic-curve-converted.patch

The original patch changes the initialization of the 'cubictcp' instance of
struct tcp_congestion_ops (the 'cubictcp.cwnd_event' field).

kpatch intentionally refuses to process such changes. This modification of
the patch instead uses kpatch load/unload hooks to set 'cubictcp.cwnd_event'
when the binary patch is loaded and to reset it to NULL when the patch is
unloaded.

It still needs to be checked whether changing that field could be
problematic due to concurrency issues, etc. The 'cwnd_event' callback is
used only via the tcp_ca_event() function.
include/net/tcp.h:

	static inline void tcp_ca_event(struct sock *sk, const enum tcp_ca_event event)
	{
		const struct inet_connection_sock *icsk = inet_csk(sk);

		if (icsk->icsk_ca_ops->cwnd_event)
			icsk->icsk_ca_ops->cwnd_event(sk, event);
	}

In turn, tcp_ca_event() is called in a number of places in
net/ipv4/tcp_output.c and net/ipv4/tcp_input.c.

One problem with this modification of the patch is that it may not be safe
to unload it. If it is possible for tcp_ca_event() to run concurrently with
the unloading of the patch, it may happen that 'icsk->icsk_ca_ops->cwnd_event'
is the address of bictcp_cwnd_event() when tcp_ca_event() checks it, but is
set to NULL right after. In that case, 'icsk->icsk_ca_ops->cwnd_event(sk, event)'
would result in a kernel oops.

Whether such a scenario is possible or not should be analyzed. If it is,
then, at the very least, the body of tcp_ca_event() should somehow be made
atomic w.r.t. the patch changing 'cwnd_event'.

Perhaps RCU could be suitable for that: a read-side critical section for the
body of tcp_ca_event(), with a single read of the
icsk->icsk_ca_ops->cwnd_event pointer via rcu_dereference(). The pointer
could then be set by the patch with rcu_assign_pointer().

An alternative suggested by Josh Poimboeuf would be to patch the functions
that call the 'cwnd_event' callback (tcp_ca_event() in this case) so that
they call bictcp_cwnd_event() directly when they detect the cubictcp
struct [1]. Note that tcp_ca_event() is inlined in a number of places, so
the binary patch will provide replacements for all of the corresponding
functions rather than for just one. It still needs to be checked whether
replacing these functions at runtime is safe.
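If the RCU approach described above were used, the read side might look
roughly like the following. This is an illustrative sketch only, not code
from an actual patch; among other things it assumes the 'cwnd_event' field
could be given an __rcu annotation so that rcu_dereference() applies:

	static inline void tcp_ca_event(struct sock *sk, const enum tcp_ca_event event)
	{
		const struct inet_connection_sock *icsk = inet_csk(sk);
		void (*cwnd_event)(struct sock *, enum tcp_ca_event);

		rcu_read_lock();
		/* paired with rcu_assign_pointer() in the patch's load hook */
		cwnd_event = rcu_dereference(icsk->icsk_ca_ops->cwnd_event);
		if (cwnd_event)
			cwnd_event(sk, event);
		rcu_read_unlock();
	}

The unload hook would then publish NULL with rcu_assign_pointer() and call
synchronize_rcu() before the patch module is allowed to go away.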
References:

[1] https://www.redhat.com/archives/kpatch/2015-September/msg00005.html

diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
index 894b7ce..9bff8a0 100644
--- a/net/ipv4/tcp_cubic.c
+++ b/net/ipv4/tcp_cubic.c
@@ -153,6 +153,27 @@ static void bictcp_init(struct sock *sk)
 		tcp_sk(sk)->snd_ssthresh = initial_ssthresh;
 }

+static void bictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+{
+	if (event == CA_EVENT_TX_START) {
+		struct bictcp *ca = inet_csk_ca(sk);
+		u32 now = tcp_time_stamp;
+		s32 delta;
+
+		delta = now - tcp_sk(sk)->lsndtime;
+
+		/* We were application limited (idle) for a while.
+		 * Shift epoch_start to keep cwnd growth to cubic curve.
+		 */
+		if (ca->epoch_start && delta > 0) {
+			ca->epoch_start += delta;
+			if (after(ca->epoch_start, now))
+				ca->epoch_start = now;
+		}
+		return;
+	}
+}
+
 /* calculate the cubic root of x using a table lookup followed by one
  * Newton-Raphson iteration.
  * Avg err ~= 0.195%
@@ -444,6 +465,20 @@ static struct tcp_congestion_ops cubictcp __read_mostly = {
 	.name		= "cubic",
 };

+void kpatch_load_cubictcp_cwnd_event(void)
+{
+	cubictcp.cwnd_event = bictcp_cwnd_event;
+}
+
+void kpatch_unload_cubictcp_cwnd_event(void)
+{
+	cubictcp.cwnd_event = NULL;
+}
+
+#include "kpatch-macros.h"
+KPATCH_LOAD_HOOK(kpatch_load_cubictcp_cwnd_event);
+KPATCH_UNLOAD_HOOK(kpatch_unload_cubictcp_cwnd_event);
+
 static int __init cubictcp_register(void)
 {
 	BUILD_BUG_ON(sizeof(struct bictcp) > ICSK_CA_PRIV_SIZE);

kpatch-0.3.2/examples/tcp_cubic-better-follow-cubic-curve-original.patch

This patch is for 3.10.x.
It combines the following commits from the mainline:

commit 30927520dbae297182990bb21d08762bcc35ce1d
Author: Eric Dumazet
Date:   Wed Sep 9 21:55:07 2015 -0700

    tcp_cubic: better follow cubic curve after idle period

commit c2e7204d180f8efc80f27959ca9cf16fa17f67db
Author: Eric Dumazet
Date:   Thu Sep 17 08:38:00 2015 -0700

    tcp_cubic: do not set epoch_start in the future

References:
http://www.phoronix.com/scan.php?page=news_item&px=Google-Fixes-TCP-Linux

diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
index 894b7ce..872b3a0 100644
--- a/net/ipv4/tcp_cubic.c
+++ b/net/ipv4/tcp_cubic.c
@@ -153,6 +153,27 @@ static void bictcp_init(struct sock *sk)
 		tcp_sk(sk)->snd_ssthresh = initial_ssthresh;
 }

+static void bictcp_cwnd_event(struct sock *sk, enum tcp_ca_event event)
+{
+	if (event == CA_EVENT_TX_START) {
+		struct bictcp *ca = inet_csk_ca(sk);
+		u32 now = tcp_time_stamp;
+		s32 delta;
+
+		delta = now - tcp_sk(sk)->lsndtime;
+
+		/* We were application limited (idle) for a while.
+		 * Shift epoch_start to keep cwnd growth to cubic curve.
+		 */
+		if (ca->epoch_start && delta > 0) {
+			ca->epoch_start += delta;
+			if (after(ca->epoch_start, now))
+				ca->epoch_start = now;
+		}
+		return;
+	}
+}
+
 /* calculate the cubic root of x using a table lookup followed by one
  * Newton-Raphson iteration.
  * Avg err ~= 0.195%
@@ -439,6 +460,7 @@ static struct tcp_congestion_ops cubictcp __read_mostly = {
 	.cong_avoid	= bictcp_cong_avoid,
 	.set_state	= bictcp_state,
 	.undo_cwnd	= bictcp_undo_cwnd,
+	.cwnd_event	= bictcp_cwnd_event,
 	.pkts_acked	= bictcp_acked,
 	.owner		= THIS_MODULE,
 	.name		= "cubic",

kpatch-0.3.2/kmod/Makefile

include ../Makefile.inc

all: clean
ifeq ($(BUILDMOD),yes)
	$(MAKE) -C core
endif

install:
ifeq ($(BUILDMOD),yes)
	$(INSTALL) -d $(MODULESDIR)/$(shell uname -r)
	$(INSTALL) -m 644 core/kpatch.ko $(MODULESDIR)/$(shell uname -r)
	$(INSTALL) -m 644 core/Module.symvers $(MODULESDIR)/$(shell uname -r)
endif
	$(INSTALL) -d $(DATADIR)/patch
	$(INSTALL) -m 644 patch/* $(DATADIR)/patch

uninstall:
ifeq ($(BUILDMOD),yes)
	$(RM) -R $(MODULESDIR)
endif
	$(RM) -R $(DATADIR)

clean:
ifeq ($(BUILDMOD),yes)
	$(MAKE) -C core clean
endif

kpatch-0.3.2/kmod/core/Makefile

# make rules
KPATCH_BUILD ?= /lib/modules/$(shell uname -r)/build
KERNELRELEASE := $(lastword $(subst /, , $(dir $(KPATCH_BUILD))))
THISDIR := $(abspath $(dir $(lastword $(MAKEFILE_LIST))))

ifeq ($(wildcard $(KPATCH_BUILD)),)
$(error $(KPATCH_BUILD) doesn\'t exist. Try installing the kernel-devel-$(KERNELRELEASE) RPM or linux-headers-$(KERNELRELEASE) DEB.)
endif

KPATCH_MAKE = $(MAKE) -C $(KPATCH_BUILD) M=$(THISDIR)

kpatch.ko: core.c
	$(KPATCH_MAKE) kpatch.ko

all: kpatch.ko

clean:
	$(RM) -Rf .*.o.cmd .*.ko.cmd .tmp_versions *.o *.ko *.mod.c \
		Module.symvers

# kbuild rules
obj-m := kpatch.o
kpatch-y := core.o shadow.o

kpatch-0.3.2/kmod/core/core.c

/*
 * Copyright (C) 2014 Seth Jennings
 * Copyright (C) 2013-2014 Josh Poimboeuf
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, see <http://www.gnu.org/licenses/>.
 */

/*
 * kpatch core module
 *
 * Patch modules register with this module to redirect old functions to new
 * functions.
 *
 * For each function patched by the module we must:
 * - Call stop_machine
 * - Ensure that no task has the old function in its call stack
 * - Add the new function address to kpatch_func_hash
 *
 * After that, each call to the old function calls into kpatch_ftrace_handler()
 * which finds the new function in kpatch_func_hash table and updates the
 * return instruction pointer so that ftrace will return to the new function.
*/ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include #include #include #include #include #include #include #include #include #include #include #include "kpatch.h" #if !defined(CONFIG_FUNCTION_TRACER) || \ !defined(CONFIG_HAVE_FENTRY) || \ !defined(CONFIG_MODULES) || \ !defined(CONFIG_SYSFS) || \ !defined(CONFIG_KALLSYMS_ALL) #error "CONFIG_FUNCTION_TRACER, CONFIG_HAVE_FENTRY, CONFIG_MODULES, CONFIG_SYSFS, CONFIG_KALLSYMS_ALL kernel config options are required" #endif #define KPATCH_HASH_BITS 8 static DEFINE_HASHTABLE(kpatch_func_hash, KPATCH_HASH_BITS); static DEFINE_SEMAPHORE(kpatch_mutex); LIST_HEAD(kpmod_list); static int kpatch_num_patched; static struct kobject *kpatch_root_kobj; struct kobject *kpatch_patches_kobj; EXPORT_SYMBOL_GPL(kpatch_patches_kobj); struct kpatch_backtrace_args { struct kpatch_module *kpmod; int ret; }; struct kpatch_kallsyms_args { const char *name; unsigned long addr; }; /* this is a double loop, use goto instead of break */ #define do_for_each_linked_func(kpmod, func) { \ struct kpatch_object *_object; \ list_for_each_entry(_object, &kpmod->objects, list) { \ if (!kpatch_object_linked(_object)) \ continue; \ list_for_each_entry(func, &_object->funcs, list) { #define while_for_each_linked_func() \ } \ } \ } /* * The kpatch core module has a state machine which allows for proper * synchronization with kpatch_ftrace_handler() when it runs in NMI context. * * +-----------------------------------------------------+ * | | * | + * v +---> KPATCH_STATE_SUCCESS * KPATCH_STATE_IDLE +---> KPATCH_STATE_UPDATING | * ^ +---> KPATCH_STATE_FAILURE * | + * | | * +-----------------------------------------------------+ * * KPATCH_STATE_IDLE: No updates are pending. The func hash is valid, and the * reader doesn't need to check func->op. * * KPATCH_STATE_UPDATING: An update is in progress. The reader must call * kpatch_state_finish(KPATCH_STATE_FAILURE) before accessing the func hash. 
* * KPATCH_STATE_FAILURE: An update failed, and the func hash might be * inconsistent (pending patched funcs might not have been removed yet). If * func->op is KPATCH_OP_PATCH, then rollback to the previous version of the * func. * * KPATCH_STATE_SUCCESS: An update succeeded, but the func hash might be * inconsistent (pending unpatched funcs might not have been removed yet). If * func->op is KPATCH_OP_UNPATCH, then rollback to the previous version of the * func. */ enum { KPATCH_STATE_IDLE, KPATCH_STATE_UPDATING, KPATCH_STATE_SUCCESS, KPATCH_STATE_FAILURE, }; static atomic_t kpatch_state; static int (*kpatch_set_memory_rw)(unsigned long addr, int numpages); static int (*kpatch_set_memory_ro)(unsigned long addr, int numpages); static inline void kpatch_state_idle(void) { int state = atomic_read(&kpatch_state); WARN_ON(state != KPATCH_STATE_SUCCESS && state != KPATCH_STATE_FAILURE); atomic_set(&kpatch_state, KPATCH_STATE_IDLE); } static inline void kpatch_state_updating(void) { WARN_ON(atomic_read(&kpatch_state) != KPATCH_STATE_IDLE); atomic_set(&kpatch_state, KPATCH_STATE_UPDATING); } /* If state is updating, change it to success or failure and return new state */ static inline int kpatch_state_finish(int state) { int result; WARN_ON(state != KPATCH_STATE_SUCCESS && state != KPATCH_STATE_FAILURE); result = atomic_cmpxchg(&kpatch_state, KPATCH_STATE_UPDATING, state); return result == KPATCH_STATE_UPDATING ? 
state : result; } static struct kpatch_func *kpatch_get_func(unsigned long ip) { struct kpatch_func *f; /* Here, we have to use rcu safe hlist because of NMI concurrency */ hash_for_each_possible_rcu(kpatch_func_hash, f, node, ip) if (f->old_addr == ip) return f; return NULL; } static struct kpatch_func *kpatch_get_prev_func(struct kpatch_func *f, unsigned long ip) { hlist_for_each_entry_continue_rcu(f, node) if (f->old_addr == ip) return f; return NULL; } static inline bool kpatch_object_linked(struct kpatch_object *object) { return object->mod || !strcmp(object->name, "vmlinux"); } static inline int kpatch_compare_addresses(unsigned long stack_addr, unsigned long func_addr, unsigned long func_size, const char *func_name) { if (stack_addr >= func_addr && stack_addr < func_addr + func_size) { pr_err("activeness safety check failed for %s\n", func_name); return -EBUSY; } return 0; } static void kpatch_backtrace_address_verify(void *data, unsigned long address, int reliable) { struct kpatch_backtrace_args *args = data; struct kpatch_module *kpmod = args->kpmod; struct kpatch_func *func; int i; if (args->ret) return; /* check kpmod funcs */ do_for_each_linked_func(kpmod, func) { unsigned long func_addr, func_size; const char *func_name; struct kpatch_func *active_func; if (func->force) continue; active_func = kpatch_get_func(func->old_addr); if (!active_func) { /* patching an unpatched func */ func_addr = func->old_addr; func_size = func->old_size; func_name = func->name; } else { /* repatching or unpatching */ func_addr = active_func->new_addr; func_size = active_func->new_size; func_name = active_func->name; } args->ret = kpatch_compare_addresses(address, func_addr, func_size, func_name); if (args->ret) return; } while_for_each_linked_func(); /* in the replace case, need to check the func hash as well */ hash_for_each_rcu(kpatch_func_hash, i, func, node) { if (func->op == KPATCH_OP_UNPATCH && !func->force) { args->ret = kpatch_compare_addresses(address, 
func->new_addr, func->new_size, func->name); if (args->ret) return; } } } static int kpatch_backtrace_stack(void *data, char *name) { return 0; } static const struct stacktrace_ops kpatch_backtrace_ops = { .address = kpatch_backtrace_address_verify, .stack = kpatch_backtrace_stack, .walk_stack = print_context_stack_bp, }; static int kpatch_print_trace_stack(void *data, char *name) { pr_cont(" <%s> ", name); return 0; } static void kpatch_print_trace_address(void *data, unsigned long addr, int reliable) { if (reliable) pr_info("[<%p>] %pB\n", (void *)addr, (void *)addr); } static const struct stacktrace_ops kpatch_print_trace_ops = { .stack = kpatch_print_trace_stack, .address = kpatch_print_trace_address, .walk_stack = print_context_stack, }; /* * Verify activeness safety, i.e. that none of the to-be-patched functions are * on the stack of any task. * * This function is called from stop_machine() context. */ static int kpatch_verify_activeness_safety(struct kpatch_module *kpmod) { struct task_struct *g, *t; int ret = 0; struct kpatch_backtrace_args args = { .kpmod = kpmod, .ret = 0 }; /* Check the stacks of all tasks. 
*/ do_each_thread(g, t) { dump_trace(t, NULL, NULL, 0, &kpatch_backtrace_ops, &args); if (args.ret) { ret = args.ret; pr_info("PID: %d Comm: %.20s\n", t->pid, t->comm); dump_trace(t, NULL, (unsigned long *)t->thread.sp, 0, &kpatch_print_trace_ops, NULL); goto out; } } while_each_thread(g, t); out: return ret; } /* Called from stop_machine */ static int kpatch_apply_patch(void *data) { struct kpatch_module *kpmod = data; struct kpatch_func *func; struct kpatch_hook *hook; struct kpatch_object *object; int ret; ret = kpatch_verify_activeness_safety(kpmod); if (ret) { kpatch_state_finish(KPATCH_STATE_FAILURE); return ret; } /* tentatively add the new funcs to the global func hash */ do_for_each_linked_func(kpmod, func) { hash_add_rcu(kpatch_func_hash, &func->node, func->old_addr); } while_for_each_linked_func(); /* memory barrier between func hash add and state change */ smp_wmb(); /* * Check if any inconsistent NMI has happened while updating. If not, * move to success state. */ ret = kpatch_state_finish(KPATCH_STATE_SUCCESS); if (ret == KPATCH_STATE_FAILURE) { pr_err("NMI activeness safety check failed\n"); /* Failed, we have to rollback patching process */ do_for_each_linked_func(kpmod, func) { hash_del_rcu(&func->node); } while_for_each_linked_func(); return -EBUSY; } /* run any user-defined load hooks */ list_for_each_entry(object, &kpmod->objects, list) { if (!kpatch_object_linked(object)) continue; list_for_each_entry(hook, &object->hooks_load, list) (*hook->hook)(); } return 0; } /* Called from stop_machine */ static int kpatch_remove_patch(void *data) { struct kpatch_module *kpmod = data; struct kpatch_func *func; struct kpatch_hook *hook; struct kpatch_object *object; int ret; ret = kpatch_verify_activeness_safety(kpmod); if (ret) { kpatch_state_finish(KPATCH_STATE_FAILURE); return ret; } /* Check if any inconsistent NMI has happened while updating */ ret = kpatch_state_finish(KPATCH_STATE_SUCCESS); if (ret == KPATCH_STATE_FAILURE) return -EBUSY; /* 
Succeeded, remove all updating funcs from hash table */ do_for_each_linked_func(kpmod, func) { hash_del_rcu(&func->node); } while_for_each_linked_func(); /* run any user-defined unload hooks */ list_for_each_entry(object, &kpmod->objects, list) { if (!kpatch_object_linked(object)) continue; list_for_each_entry(hook, &object->hooks_unload, list) (*hook->hook)(); } return 0; } /* * This is where the magic happens. Update regs->ip to tell ftrace to return * to the new function. * * If there are multiple patch modules that have registered to patch the same * function, the last one to register wins, as it'll be first in the hash * bucket. */ static void notrace kpatch_ftrace_handler(unsigned long ip, unsigned long parent_ip, struct ftrace_ops *fops, struct pt_regs *regs) { struct kpatch_func *func; int state; preempt_disable_notrace(); if (likely(!in_nmi())) func = kpatch_get_func(ip); else { /* Checking for NMI inconsistency */ state = kpatch_state_finish(KPATCH_STATE_FAILURE); /* no memory reordering between state and func hash read */ smp_rmb(); func = kpatch_get_func(ip); if (likely(state == KPATCH_STATE_IDLE)) goto done; if (state == KPATCH_STATE_SUCCESS) { /* * Patching succeeded. If the function was being * unpatched, roll back to the previous version. */ if (func && func->op == KPATCH_OP_UNPATCH) func = kpatch_get_prev_func(func, ip); } else { /* * Patching failed. If the function was being patched, * roll back to the previous version. 
*/ if (func && func->op == KPATCH_OP_PATCH) func = kpatch_get_prev_func(func, ip); } } done: if (func) regs->ip = func->new_addr + MCOUNT_INSN_SIZE; preempt_enable_notrace(); } static struct ftrace_ops kpatch_ftrace_ops __read_mostly = { .func = kpatch_ftrace_handler, .flags = FTRACE_OPS_FL_SAVE_REGS, }; static int kpatch_ftrace_add_func(unsigned long ip) { int ret; /* check if any other patch modules have also patched this func */ if (kpatch_get_func(ip)) return 0; ret = ftrace_set_filter_ip(&kpatch_ftrace_ops, ip, 0, 0); if (ret) { pr_err("can't set ftrace filter at address 0x%lx\n", ip); return ret; } if (!kpatch_num_patched) { ret = register_ftrace_function(&kpatch_ftrace_ops); if (ret) { pr_err("can't register ftrace handler\n"); ftrace_set_filter_ip(&kpatch_ftrace_ops, ip, 1, 0); return ret; } } kpatch_num_patched++; return 0; } static int kpatch_ftrace_remove_func(unsigned long ip) { int ret; /* check if any other patch modules have also patched this func */ if (kpatch_get_func(ip)) return 0; if (kpatch_num_patched == 1) { ret = unregister_ftrace_function(&kpatch_ftrace_ops); if (ret) { pr_err("can't unregister ftrace handler\n"); return ret; } } kpatch_num_patched--; ret = ftrace_set_filter_ip(&kpatch_ftrace_ops, ip, 1, 0); if (ret) { pr_err("can't remove ftrace filter at address 0x%lx\n", ip); return ret; } return 0; } static int kpatch_kallsyms_callback(void *data, const char *name, struct module *mod, unsigned long addr) { struct kpatch_kallsyms_args *args = data; if (args->addr == addr && !strcmp(args->name, name)) return 1; return 0; } static int kpatch_verify_symbol_match(const char *name, unsigned long addr) { int ret; struct kpatch_kallsyms_args args = { .name = name, .addr = addr, }; ret = kallsyms_on_each_symbol(kpatch_kallsyms_callback, &args); if (!ret) { pr_err("base kernel mismatch for symbol '%s'\n", name); pr_err("expected address was 0x%016lx\n", addr); return -EINVAL; } return 0; } static unsigned long kpatch_find_module_symbol(struct 
module *mod, const char *name) { char buf[KSYM_SYMBOL_LEN]; /* check total string length for overrun */ if (strlen(mod->name) + strlen(name) + 1 >= KSYM_SYMBOL_LEN) { pr_err("buffer overrun finding symbol '%s' in module '%s'\n", name, mod->name); return 0; } /* encode symbol name as "mod->name:name" */ strcpy(buf, mod->name); strcat(buf, ":"); strcat(buf, name); return kallsyms_lookup_name(buf); } /* * External symbols are located outside the parent object (where the parent * object is either vmlinux or the kmod being patched). */ static unsigned long kpatch_find_external_symbol(struct kpatch_module *kpmod, const char *name) { const struct kernel_symbol *sym; /* first, check if it's an exported symbol */ preempt_disable(); sym = find_symbol(name, NULL, NULL, true, true); preempt_enable(); if (sym) return sym->value; /* otherwise check if it's in another .o within the patch module */ return kpatch_find_module_symbol(kpmod->mod, name); } static int kpatch_write_relocations(struct kpatch_module *kpmod, struct kpatch_object *object) { int ret, size, readonly = 0, numpages; struct kpatch_dynrela *dynrela; u64 loc, val; #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 5, 0) unsigned long core = (unsigned long)kpmod->mod->core_layout.base; unsigned long core_size = kpmod->mod->core_layout.size; #else unsigned long core = (unsigned long)kpmod->mod->module_core; unsigned long core_size = kpmod->mod->core_size; #endif unsigned long src; list_for_each_entry(dynrela, &object->dynrelas, list) { if (!strcmp(object->name, "vmlinux")) { ret = kpatch_verify_symbol_match(dynrela->name, dynrela->src); if (ret) return ret; } else { /* module, dynrela->src needs to be discovered */ if (dynrela->external) src = kpatch_find_external_symbol(kpmod, dynrela->name); else src = kpatch_find_module_symbol(object->mod, dynrela->name); if (!src) { pr_err("unable to find symbol '%s'\n", dynrela->name); return -EINVAL; } dynrela->src = src; } switch (dynrela->type) { case R_X86_64_NONE: continue; case 
R_X86_64_PC32: loc = dynrela->dest; val = (u32)(dynrela->src + dynrela->addend - dynrela->dest); size = 4; break; case R_X86_64_32S: loc = dynrela->dest; val = (s32)dynrela->src + dynrela->addend; size = 4; break; case R_X86_64_64: loc = dynrela->dest; val = dynrela->src; size = 8; break; default: pr_err("unsupported rela type %ld for source %s (0x%lx <- 0x%lx)\n", dynrela->type, dynrela->name, dynrela->dest, dynrela->src); return -EINVAL; } if (loc < core || loc >= core + core_size) { pr_err("bad dynrela location 0x%llx for symbol %s\n", loc, dynrela->name); return -EINVAL; } #ifdef CONFIG_DEBUG_SET_MODULE_RONX #if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 5, 0) if (loc < core + kpmod->mod->core_layout.ro_size) #else if (loc < core + kpmod->mod->core_ro_size) #endif readonly = 1; #endif numpages = (PAGE_SIZE - (loc & ~PAGE_MASK) >= size) ? 1 : 2; if (readonly) kpatch_set_memory_rw(loc & PAGE_MASK, numpages); ret = probe_kernel_write((void *)loc, &val, size); if (readonly) kpatch_set_memory_ro(loc & PAGE_MASK, numpages); if (ret) { pr_err("write to 0x%llx failed for symbol %s\n", loc, dynrela->name); return ret; } } return 0; } static int kpatch_unlink_object(struct kpatch_object *object) { struct kpatch_func *func; int ret; list_for_each_entry(func, &object->funcs, list) { if (!func->old_addr) continue; ret = kpatch_ftrace_remove_func(func->old_addr); if (ret) { WARN(1, "can't unregister ftrace for address 0x%lx\n", func->old_addr); return ret; } } if (object->mod) module_put(object->mod); return 0; } /* * Link to a to-be-patched object in preparation for patching it. 
* * - Find the object module * - Write patch module relocations which reference the object * - Calculate the patched functions' addresses * - Register them with ftrace */ static int kpatch_link_object(struct kpatch_module *kpmod, struct kpatch_object *object) { struct module *mod = NULL; struct kpatch_func *func, *func_err = NULL; int ret; bool vmlinux = !strcmp(object->name, "vmlinux"); if (!vmlinux) { mutex_lock(&module_mutex); mod = find_module(object->name); if (!mod) { /* * The module hasn't been loaded yet. We can patch it * later in kpatch_module_notify(). */ mutex_unlock(&module_mutex); return 0; } /* should never fail because we have the mutex */ WARN_ON(!try_module_get(mod)); mutex_unlock(&module_mutex); object->mod = mod; } ret = kpatch_write_relocations(kpmod, object); if (ret) goto err_put; list_for_each_entry(func, &object->funcs, list) { /* calculate actual old location */ if (vmlinux) { ret = kpatch_verify_symbol_match(func->name, func->old_addr); if (ret) { func_err = func; goto err_ftrace; } } else { unsigned long old_addr; old_addr = kpatch_find_module_symbol(mod, func->name); if (!old_addr) { pr_err("unable to find symbol '%s' in module '%s'\n", func->name, mod->name); func_err = func; ret = -EINVAL; goto err_ftrace; } func->old_addr = old_addr; } /* add to ftrace filter and register handler if needed */ ret = kpatch_ftrace_add_func(func->old_addr); if (ret) { func_err = func; goto err_ftrace; } } return 0; err_ftrace: list_for_each_entry(func, &object->funcs, list) { if (func == func_err) break; WARN_ON(kpatch_ftrace_remove_func(func->old_addr)); } err_put: if (!vmlinux) module_put(mod); return ret; } static int kpatch_module_notify(struct notifier_block *nb, unsigned long action, void *data) { struct module *mod = data; struct kpatch_module *kpmod; struct kpatch_object *object; struct kpatch_func *func; struct kpatch_hook *hook; int ret = 0; bool found = false; if (action != MODULE_STATE_COMING) return 0; down(&kpatch_mutex); 
list_for_each_entry(kpmod, &kpmod_list, list) { list_for_each_entry(object, &kpmod->objects, list) { if (kpatch_object_linked(object)) continue; if (!strcmp(object->name, mod->name)) { found = true; goto done; } } } done: if (!found) goto out; ret = kpatch_link_object(kpmod, object); if (ret) goto out; BUG_ON(!object->mod); pr_notice("patching newly loaded module '%s'\n", object->name); /* run any user-defined load hooks */ list_for_each_entry(hook, &object->hooks_load, list) (*hook->hook)(); /* add to the global func hash */ list_for_each_entry(func, &object->funcs, list) hash_add_rcu(kpatch_func_hash, &func->node, func->old_addr); out: up(&kpatch_mutex); /* no way to stop the module load on error */ WARN(ret, "error (%d) patching newly loaded module '%s'\n", ret, object->name); return 0; } int kpatch_register(struct kpatch_module *kpmod, bool replace) { int ret, i, force = 0; struct kpatch_object *object, *object_err = NULL; struct kpatch_func *func; if (!kpmod->mod || list_empty(&kpmod->objects)) return -EINVAL; down(&kpatch_mutex); if (kpmod->enabled) { ret = -EINVAL; goto err_up; } list_add_tail(&kpmod->list, &kpmod_list); if (!try_module_get(kpmod->mod)) { ret = -ENODEV; goto err_list; } list_for_each_entry(object, &kpmod->objects, list) { ret = kpatch_link_object(kpmod, object); if (ret) { object_err = object; goto err_unlink; } if (!kpatch_object_linked(object)) { pr_notice("delaying patch of unloaded module '%s'\n", object->name); continue; } if (strcmp(object->name, "vmlinux")) pr_notice("patching module '%s'\n", object->name); list_for_each_entry(func, &object->funcs, list) func->op = KPATCH_OP_PATCH; } if (replace) hash_for_each_rcu(kpatch_func_hash, i, func, node) func->op = KPATCH_OP_UNPATCH; /* memory barrier between func hash and state write */ smp_wmb(); kpatch_state_updating(); /* * Idle the CPUs, verify activeness safety, and atomically make the new * functions visible to the ftrace handler. 
*/ ret = stop_machine(kpatch_apply_patch, kpmod, NULL); /* * For the replace case, remove any obsolete funcs from the hash and * the ftrace filter, and disable the owning patch module so that it * can be removed. */ if (!ret && replace) { struct kpatch_module *kpmod2, *safe; hash_for_each_rcu(kpatch_func_hash, i, func, node) { if (func->op != KPATCH_OP_UNPATCH) continue; if (func->force) force = 1; hash_del_rcu(&func->node); WARN_ON(kpatch_ftrace_remove_func(func->old_addr)); } list_for_each_entry_safe(kpmod2, safe, &kpmod_list, list) { if (kpmod == kpmod2) continue; kpmod2->enabled = false; pr_notice("unloaded patch module '%s'\n", kpmod2->mod->name); /* * Don't allow modules with forced functions to be * removed because they might still be in use. */ if (!force) module_put(kpmod2->mod); list_del(&kpmod2->list); } } /* memory barrier between func hash and state write */ smp_wmb(); /* NMI handlers can return to normal now */ kpatch_state_idle(); /* * Wait for all existing NMI handlers to complete so that they don't * see any changes to funcs or funcs->op that might occur after this * point. * * Any NMI handlers starting after this point will see the IDLE state. 
*/ synchronize_rcu(); if (ret) goto err_ops; do_for_each_linked_func(kpmod, func) { func->op = KPATCH_OP_NONE; } while_for_each_linked_func(); /* TODO: need TAINT_KPATCH */ pr_notice_once("tainting kernel with TAINT_USER\n"); add_taint(TAINT_USER, LOCKDEP_STILL_OK); pr_notice("loaded patch module '%s'\n", kpmod->mod->name); kpmod->enabled = true; up(&kpatch_mutex); return 0; err_ops: if (replace) hash_for_each_rcu(kpatch_func_hash, i, func, node) func->op = KPATCH_OP_NONE; err_unlink: list_for_each_entry(object, &kpmod->objects, list) { if (object == object_err) break; if (!kpatch_object_linked(object)) continue; WARN_ON(kpatch_unlink_object(object)); } module_put(kpmod->mod); err_list: list_del(&kpmod->list); err_up: up(&kpatch_mutex); return ret; } EXPORT_SYMBOL(kpatch_register); int kpatch_unregister(struct kpatch_module *kpmod) { struct kpatch_object *object; struct kpatch_func *func; int ret, force = 0; down(&kpatch_mutex); if (!kpmod->enabled) { ret = -EINVAL; goto out; } do_for_each_linked_func(kpmod, func) { func->op = KPATCH_OP_UNPATCH; if (func->force) force = 1; } while_for_each_linked_func(); /* memory barrier between func hash and state write */ smp_wmb(); kpatch_state_updating(); ret = stop_machine(kpatch_remove_patch, kpmod, NULL); /* NMI handlers can return to normal now */ kpatch_state_idle(); /* * Wait for all existing NMI handlers to complete so that they don't * see any changes to funcs or funcs->op that might occur after this * point. * * Any NMI handlers starting after this point will see the IDLE state. 
*/ synchronize_rcu(); if (ret) { do_for_each_linked_func(kpmod, func) { func->op = KPATCH_OP_NONE; } while_for_each_linked_func(); goto out; } list_for_each_entry(object, &kpmod->objects, list) { if (!kpatch_object_linked(object)) continue; ret = kpatch_unlink_object(object); if (ret) goto out; } pr_notice("unloaded patch module '%s'\n", kpmod->mod->name); kpmod->enabled = false; /* * Don't allow modules with forced functions to be removed because they * might still be in use. */ if (!force) module_put(kpmod->mod); list_del(&kpmod->list); out: up(&kpatch_mutex); return ret; } EXPORT_SYMBOL(kpatch_unregister); static struct notifier_block kpatch_module_nb = { .notifier_call = kpatch_module_notify, .priority = INT_MIN, /* called last */ }; static int kpatch_init(void) { int ret; kpatch_set_memory_rw = (void *)kallsyms_lookup_name("set_memory_rw"); if (!kpatch_set_memory_rw) { pr_err("can't find set_memory_rw symbol\n"); return -ENXIO; } kpatch_set_memory_ro = (void *)kallsyms_lookup_name("set_memory_ro"); if (!kpatch_set_memory_ro) { pr_err("can't find set_memory_ro symbol\n"); return -ENXIO; } kpatch_root_kobj = kobject_create_and_add("kpatch", kernel_kobj); if (!kpatch_root_kobj) return -ENOMEM; kpatch_patches_kobj = kobject_create_and_add("patches", kpatch_root_kobj); if (!kpatch_patches_kobj) { ret = -ENOMEM; goto err_root_kobj; } ret = register_module_notifier(&kpatch_module_nb); if (ret) goto err_patches_kobj; return 0; err_patches_kobj: kobject_put(kpatch_patches_kobj); err_root_kobj: kobject_put(kpatch_root_kobj); return ret; } static void kpatch_exit(void) { rcu_barrier(); WARN_ON(kpatch_num_patched != 0); WARN_ON(unregister_module_notifier(&kpatch_module_nb)); kobject_put(kpatch_patches_kobj); kobject_put(kpatch_root_kobj); } module_init(kpatch_init); module_exit(kpatch_exit); MODULE_LICENSE("GPL"); kpatch-0.3.2/kmod/core/kpatch.h000066400000000000000000000043771266116401600163110ustar00rootroot00000000000000/* * kpatch.h * * Copyright (C) 2014 Seth 
Jennings * Copyright (C) 2013-2014 Josh Poimboeuf * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, see <http://www.gnu.org/licenses/>. * * Contains the API for the core kpatch module used by the patch modules */ #ifndef _KPATCH_H_ #define _KPATCH_H_ #include <linux/types.h> #include <linux/module.h> enum kpatch_op { KPATCH_OP_NONE, KPATCH_OP_PATCH, KPATCH_OP_UNPATCH, }; struct kpatch_func { /* public */ unsigned long new_addr; unsigned long new_size; unsigned long old_addr; unsigned long old_size; const char *name; struct list_head list; int force; /* private */ struct hlist_node node; enum kpatch_op op; }; struct kpatch_dynrela { unsigned long dest; unsigned long src; unsigned long type; const char *name; int addend; int external; struct list_head list; }; struct kpatch_hook { struct list_head list; void (*hook)(void); }; struct kpatch_object { struct list_head list; const char *name; struct list_head funcs; struct list_head dynrelas; struct list_head hooks_load; struct list_head hooks_unload; /* private */ struct module *mod; }; struct kpatch_module { /* public */ struct module *mod; struct list_head objects; /* public read-only */ bool enabled; /* private */ struct list_head list; }; extern struct kobject *kpatch_patches_kobj; extern int kpatch_register(struct kpatch_module *kpmod, bool replace); extern int kpatch_unregister(struct kpatch_module *kpmod); extern void *kpatch_shadow_alloc(void *obj, char *var, size_t size, gfp_t gfp); extern void kpatch_shadow_free(void *obj, char *var); 
extern void *kpatch_shadow_get(void *obj, char *var); #endif /* _KPATCH_H_ */ kpatch-0.3.2/kmod/core/shadow.c000066400000000000000000000104351266116401600163070ustar00rootroot00000000000000/* * Copyright (C) 2014 Josh Poimboeuf * Copyright (C) 2014 Seth Jennings * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, see <http://www.gnu.org/licenses/>. */ /* * kpatch shadow variables * * These functions can be used to add new "shadow" fields to existing data * structures. For example, to allocate a "newpid" variable associated with an * instance of task_struct, and assign it a value of 1000: * * struct task_struct *tsk = current; * int *newpid; * newpid = kpatch_shadow_alloc(tsk, "newpid", sizeof(int), GFP_KERNEL); * if (newpid) * *newpid = 1000; * * To retrieve a pointer to the variable: * * struct task_struct *tsk = current; * int *newpid; * newpid = kpatch_shadow_get(tsk, "newpid"); * if (newpid) * printk("task newpid = %d\n", *newpid); // prints "task newpid = 1000" * * To free it: * * kpatch_shadow_free(tsk, "newpid"); */ #include <linux/hashtable.h> #include <linux/slab.h> #include "kpatch.h" static DEFINE_HASHTABLE(kpatch_shadow_hash, 12); static DEFINE_SPINLOCK(kpatch_shadow_lock); struct kpatch_shadow { struct hlist_node node; struct rcu_head rcu_head; void *obj; union { char *var; /* assumed to be 4-byte aligned */ unsigned long flags; }; void *data; }; #define SHADOW_FLAG_INPLACE 0x1 #define SHADOW_FLAG_RESERVED0 0x2 /* reserved for future use */ #define SHADOW_FLAG_MASK 0x3 #define 
SHADOW_PTR_MASK (~(SHADOW_FLAG_MASK)) static inline void shadow_set_inplace(struct kpatch_shadow *shadow) { shadow->flags |= SHADOW_FLAG_INPLACE; } static inline int shadow_is_inplace(struct kpatch_shadow *shadow) { return shadow->flags & SHADOW_FLAG_INPLACE; } static inline char *shadow_var(struct kpatch_shadow *shadow) { return (char *)((unsigned long)shadow->var & SHADOW_PTR_MASK); } void *kpatch_shadow_alloc(void *obj, char *var, size_t size, gfp_t gfp) { unsigned long flags; struct kpatch_shadow *shadow; shadow = kmalloc(sizeof(*shadow), gfp); if (!shadow) return NULL; shadow->obj = obj; shadow->var = kstrdup(var, gfp); if (!shadow->var) { kfree(shadow); return NULL; } if (size <= sizeof(shadow->data)) { shadow->data = &shadow->data; shadow_set_inplace(shadow); } else { shadow->data = kmalloc(size, gfp); if (!shadow->data) { kfree(shadow->var); kfree(shadow); return NULL; } } spin_lock_irqsave(&kpatch_shadow_lock, flags); hash_add_rcu(kpatch_shadow_hash, &shadow->node, (unsigned long)obj); spin_unlock_irqrestore(&kpatch_shadow_lock, flags); return shadow->data; } EXPORT_SYMBOL_GPL(kpatch_shadow_alloc); static void kpatch_shadow_rcu_free(struct rcu_head *head) { struct kpatch_shadow *shadow; shadow = container_of(head, struct kpatch_shadow, rcu_head); if (!shadow_is_inplace(shadow)) kfree(shadow->data); kfree(shadow_var(shadow)); kfree(shadow); } void kpatch_shadow_free(void *obj, char *var) { unsigned long flags; struct kpatch_shadow *shadow; spin_lock_irqsave(&kpatch_shadow_lock, flags); hash_for_each_possible(kpatch_shadow_hash, shadow, node, (unsigned long)obj) { if (shadow->obj == obj && !strcmp(shadow_var(shadow), var)) { hash_del_rcu(&shadow->node); spin_unlock_irqrestore(&kpatch_shadow_lock, flags); call_rcu(&shadow->rcu_head, kpatch_shadow_rcu_free); return; } } spin_unlock_irqrestore(&kpatch_shadow_lock, flags); } EXPORT_SYMBOL_GPL(kpatch_shadow_free); void *kpatch_shadow_get(void *obj, char *var) { struct kpatch_shadow *shadow; rcu_read_lock(); 
hash_for_each_possible_rcu(kpatch_shadow_hash, shadow, node, (unsigned long)obj) { if (shadow->obj == obj && !strcmp(shadow_var(shadow), var)) { rcu_read_unlock(); if (shadow_is_inplace(shadow)) return &(shadow->data); return shadow->data; } } rcu_read_unlock(); return NULL; } EXPORT_SYMBOL_GPL(kpatch_shadow_get); kpatch-0.3.2/kmod/patch/000077500000000000000000000000001266116401600150225ustar00rootroot00000000000000kpatch-0.3.2/kmod/patch/Makefile000066400000000000000000000007741266116401600164720ustar00rootroot00000000000000KPATCH_NAME ?= patch KPATCH_BUILD ?= /lib/modules/$(shell uname -r)/build KPATCH_MAKE = $(MAKE) -C $(KPATCH_BUILD) M=$(PWD) obj-m += kpatch-$(KPATCH_NAME).o kpatch-$(KPATCH_NAME)-objs += patch-hook.o kpatch.lds output.o all: kpatch-$(KPATCH_NAME).ko kpatch-$(KPATCH_NAME).ko: $(KPATCH_MAKE) kpatch-$(KPATCH_NAME).ko patch-hook.o: patch-hook.c kpatch-patch-hook.c livepatch-patch-hook.c $(KPATCH_MAKE) patch-hook.o clean: $(RM) -Rf .*.o.cmd .*.ko.cmd .tmp_versions *.o *.ko *.mod.c \ Module.symvers kpatch-0.3.2/kmod/patch/kpatch-macros.h000066400000000000000000000074251266116401600177370ustar00rootroot00000000000000#ifndef __KPATCH_MACROS_H_ #define __KPATCH_MACROS_H_ #include #include typedef void (*kpatch_loadcall_t)(void); typedef void (*kpatch_unloadcall_t)(void); struct kpatch_load { kpatch_loadcall_t fn; char *objname; /* filled in by create-diff-object */ }; struct kpatch_unload { kpatch_unloadcall_t fn; char *objname; /* filled in by create-diff-object */ }; /* * KPATCH_IGNORE_SECTION macro * * This macro is for ignoring sections that may change as a side effect of * another change or might be a non-bundlable section; that is one that does * not honor -ffunction-section and create a one-to-one relation from function * symbol to section. 
*/ #define KPATCH_IGNORE_SECTION(_sec) \ char *__UNIQUE_ID(kpatch_ignore_section_) __section(.kpatch.ignore.sections) = _sec; /* * KPATCH_IGNORE_FUNCTION macro * * This macro is for ignoring functions that may change as a side effect of a * change in another function. The WARN class of macros, for example, embed * the line number in an instruction, which will cause the function to be * detected as changed when, in fact, there has been no functional change. */ #define KPATCH_IGNORE_FUNCTION(_fn) \ void *__kpatch_ignore_func_##_fn __section(.kpatch.ignore.functions) = _fn; /* * KPATCH_LOAD_HOOK macro * * The first line only ensures that the hook being registered has the required * function signature. If not, there is compile error on this line. * * The section line declares a struct kpatch_load to be allocated in a new * .kpatch.hook.load section. This kpatch_load_data symbol is later stripped * by create-diff-object so that it can be declared in multiple objects that * are later linked together, avoiding global symbol collision. Since multiple * hooks can be registered, the .kpatch.hook.load section is a table of struct * kpatch_load elements that will be executed in series by the kpatch core * module at load time, assuming the kernel object (module) is currently * loaded; otherwise, the hook is called when module to be patched is loaded * via the module load notifier. */ #define KPATCH_LOAD_HOOK(_fn) \ static inline kpatch_loadcall_t __loadtest(void) { return _fn; } \ struct kpatch_load kpatch_load_data __section(.kpatch.hooks.load) = { \ .fn = _fn, \ .objname = NULL \ }; /* * KPATCH_UNLOAD_HOOK macro * * Same as LOAD hook with s/load/unload/ */ #define KPATCH_UNLOAD_HOOK(_fn) \ static inline kpatch_unloadcall_t __unloadtest(void) { return _fn; } \ struct kpatch_unload kpatch_unload_data __section(.kpatch.hooks.unload) = { \ .fn = _fn, \ .objname = NULL \ }; /* * KPATCH_FORCE_UNSAFE macro * * USE WITH EXTREME CAUTION! 
* * Allows patch authors to bypass the activeness safety check at patch load * time. Do this ONLY IF 1) the patch application will always/likely fail due * to the function being on the stack of at least one thread at all times and * 2) it is safe for both the original and patched versions of the function to * run concurrently. */ #define KPATCH_FORCE_UNSAFE(_fn) \ void *__kpatch_force_func_##_fn __section(.kpatch.force) = _fn; /* * KPATCH_PRINTK macro * * Use this instead of calling printk to avoid unwanted compiler optimizations * which cause kpatch-build errors. * * The printk function is annotated with the __cold attribute, which tells gcc * that the function is unlikely to be called. A side effect of this is that * code paths containing calls to printk might also be marked cold, leading to * other functions called in those code paths getting moved into .text.unlikely * or being uninlined. * * This macro places printk in its own code path so as not to make the * surrounding code path cold. */ #define KPATCH_PRINTK(_fmt, ...) \ ({ \ if (jiffies) \ printk(_fmt, ## __VA_ARGS__); \ }) #endif /* __KPATCH_MACROS_H_ */ kpatch-0.3.2/kmod/patch/kpatch-patch-hook.c000066400000000000000000000243601266116401600205000ustar00rootroot00000000000000/* * Copyright (C) 2013-2014 Josh Poimboeuf * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. 
* * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, * 02110-1301, USA. */ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt #include <linux/module.h> #include <linux/slab.h> #include <linux/kobject.h> #include <linux/sysfs.h> #include "kpatch.h" #include "kpatch-patch.h" static bool replace; module_param(replace, bool, S_IRUGO); MODULE_PARM_DESC(replace, "replace all previously loaded patch modules"); extern struct kpatch_patch_func __kpatch_funcs[], __kpatch_funcs_end[]; extern struct kpatch_patch_dynrela __kpatch_dynrelas[], __kpatch_dynrelas_end[]; extern struct kpatch_patch_hook __kpatch_hooks_load[], __kpatch_hooks_load_end[]; extern struct kpatch_patch_hook __kpatch_hooks_unload[], __kpatch_hooks_unload_end[]; extern unsigned long __kpatch_force_funcs[], __kpatch_force_funcs_end[]; extern char __kpatch_checksum[]; static struct kpatch_module kpmod; static struct kobject *patch_kobj; static struct kobject *patch_funcs_kobj; struct patch_func_obj { struct kobject kobj; struct kpatch_func *func; }; static struct patch_func_obj **patch_func_objs = NULL; static ssize_t patch_enabled_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { return sprintf(buf, "%d\n", kpmod.enabled); } static ssize_t patch_enabled_store(struct kobject *kobj, struct kobj_attribute *attr, const char *buf, size_t count) { int ret; unsigned long val; ret = kstrtoul(buf, 10, &val); if (ret) return ret; val = !!val; if (val) ret = kpatch_register(&kpmod, replace); else ret = kpatch_unregister(&kpmod); if (ret) return ret; return count; } static ssize_t patch_checksum_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { return snprintf(buf, PAGE_SIZE, "%s\n", __kpatch_checksum); } static struct kobj_attribute patch_enabled_attr = __ATTR(enabled, 0644, patch_enabled_show, patch_enabled_store); static struct kobj_attribute patch_checksum_attr = __ATTR(checksum, 0444, patch_checksum_show, 
NULL); static struct attribute *patch_attrs[] = { &patch_enabled_attr.attr, &patch_checksum_attr.attr, NULL, }; static struct attribute_group patch_attr_group = { .attrs = patch_attrs, }; static ssize_t patch_func_old_addr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { struct patch_func_obj *func = container_of(kobj, struct patch_func_obj, kobj); return sprintf(buf, "0x%lx\n", func->func->old_addr); } static ssize_t patch_func_new_addr_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { struct patch_func_obj *func = container_of(kobj, struct patch_func_obj, kobj); return sprintf(buf, "0x%lx\n", func->func->new_addr); } static struct kobj_attribute patch_old_addr_attr = __ATTR(old_addr, S_IRUSR, patch_func_old_addr_show, NULL); static struct kobj_attribute patch_new_addr_attr = __ATTR(new_addr, S_IRUSR, patch_func_new_addr_show, NULL); static void patch_func_kobj_free(struct kobject *kobj) { struct patch_func_obj *func = container_of(kobj, struct patch_func_obj, kobj); kfree(func); } static struct attribute *patch_func_kobj_attrs[] = { &patch_old_addr_attr.attr, &patch_new_addr_attr.attr, NULL, }; static ssize_t patch_func_kobj_show(struct kobject *kobj, struct attribute *attr, char *buf) { struct kobj_attribute *func_attr = container_of(attr, struct kobj_attribute, attr); return func_attr->show(kobj, func_attr, buf); } static const struct sysfs_ops patch_func_sysfs_ops = { .show = patch_func_kobj_show, }; static struct kobj_type patch_func_ktype = { .release = patch_func_kobj_free, .sysfs_ops = &patch_func_sysfs_ops, .default_attrs = patch_func_kobj_attrs, }; static struct patch_func_obj *patch_func_kobj_alloc(void) { struct patch_func_obj *func; func = kzalloc(sizeof(*func), GFP_KERNEL); if (!func) return NULL; kobject_init(&func->kobj, &patch_func_ktype); return func; } static struct kpatch_object *patch_find_or_add_object(struct list_head *head, const char *name) { struct kpatch_object *object; list_for_each_entry(object, 
head, list) { if (!strcmp(object->name, name)) return object; } object = kzalloc(sizeof(*object), GFP_KERNEL); if (!object) return NULL; object->name = name; INIT_LIST_HEAD(&object->funcs); INIT_LIST_HEAD(&object->dynrelas); INIT_LIST_HEAD(&object->hooks_load); INIT_LIST_HEAD(&object->hooks_unload); list_add_tail(&object->list, head); return object; } static void patch_free_objects(void) { struct kpatch_object *object, *object_safe; struct kpatch_func *func, *func_safe; struct kpatch_dynrela *dynrela, *dynrela_safe; struct kpatch_hook *hook, *hook_safe; int i; if (!patch_func_objs) return; for (i = 0; i < __kpatch_funcs_end - __kpatch_funcs; i++) if (patch_func_objs[i]) kobject_put(&patch_func_objs[i]->kobj); kfree(patch_func_objs); list_for_each_entry_safe(object, object_safe, &kpmod.objects, list) { list_for_each_entry_safe(func, func_safe, &object->funcs, list) { list_del(&func->list); kfree(func); } list_for_each_entry_safe(dynrela, dynrela_safe, &object->dynrelas, list) { list_del(&dynrela->list); kfree(dynrela); } list_for_each_entry_safe(hook, hook_safe, &object->hooks_load, list) { list_del(&hook->list); kfree(hook); } list_for_each_entry_safe(hook, hook_safe, &object->hooks_unload, list) { list_del(&hook->list); kfree(hook); } list_del(&object->list); kfree(object); } } static int patch_is_func_forced(unsigned long addr) { unsigned long *a; for (a = __kpatch_force_funcs; a < __kpatch_force_funcs_end; a++) if (*a == addr) return 1; return 0; } static int patch_make_funcs_list(struct list_head *objects) { struct kpatch_object *object; struct kpatch_patch_func *p_func; struct kpatch_func *func; struct patch_func_obj *func_obj; int i = 0, funcs_nr, ret; funcs_nr = __kpatch_funcs_end - __kpatch_funcs; patch_func_objs = kzalloc(funcs_nr * sizeof(struct patch_func_obj*), GFP_KERNEL); if (!patch_func_objs) return -ENOMEM; for (p_func = __kpatch_funcs; p_func < __kpatch_funcs_end; p_func++) { object = patch_find_or_add_object(&kpmod.objects, p_func->objname); if 
(!object) return -ENOMEM; func = kzalloc(sizeof(*func), GFP_KERNEL); if (!func) return -ENOMEM; func->new_addr = p_func->new_addr; func->new_size = p_func->new_size; if (!strcmp("vmlinux", object->name)) func->old_addr = p_func->old_addr; else func->old_addr = 0x0; func->old_size = p_func->old_size; func->name = p_func->name; func->force = patch_is_func_forced(func->new_addr); list_add_tail(&func->list, &object->funcs); func_obj = patch_func_kobj_alloc(); if (!func_obj) return -ENOMEM; func_obj->func = func; patch_func_objs[i++] = func_obj; ret = kobject_add(&func_obj->kobj, patch_funcs_kobj, "%s", func->name); if (ret) return ret; } return 0; } static int patch_make_dynrelas_list(struct list_head *objects) { struct kpatch_object *object; struct kpatch_patch_dynrela *p_dynrela; struct kpatch_dynrela *dynrela; for (p_dynrela = __kpatch_dynrelas; p_dynrela < __kpatch_dynrelas_end; p_dynrela++) { object = patch_find_or_add_object(objects, p_dynrela->objname); if (!object) return -ENOMEM; dynrela = kzalloc(sizeof(*dynrela), GFP_KERNEL); if (!dynrela) return -ENOMEM; dynrela->dest = p_dynrela->dest; dynrela->src = p_dynrela->src; dynrela->type = p_dynrela->type; dynrela->name = p_dynrela->name; dynrela->external = p_dynrela->external; dynrela->addend = p_dynrela->addend; list_add_tail(&dynrela->list, &object->dynrelas); } return 0; } static int patch_make_hook_lists(struct list_head *objects) { struct kpatch_object *object; struct kpatch_patch_hook *p_hook; struct kpatch_hook *hook; for (p_hook = __kpatch_hooks_load; p_hook < __kpatch_hooks_load_end; p_hook++) { object = patch_find_or_add_object(objects, p_hook->objname); if (!object) return -ENOMEM; hook = kzalloc(sizeof(*hook), GFP_KERNEL); if (!hook) return -ENOMEM; hook->hook = p_hook->hook; list_add_tail(&hook->list, &object->hooks_load); } for (p_hook = __kpatch_hooks_unload; p_hook < __kpatch_hooks_unload_end; p_hook++) { object = patch_find_or_add_object(objects, p_hook->objname); if (!object) return -ENOMEM; 
		hook = kzalloc(sizeof(*hook), GFP_KERNEL);
		if (!hook)
			return -ENOMEM;
		hook->hook = p_hook->hook;
		list_add_tail(&hook->list, &object->hooks_unload);
	}
	return 0;
}

static int __init patch_init(void)
{
	int ret;

	patch_kobj = kobject_create_and_add(THIS_MODULE->name,
					    kpatch_patches_kobj);
	if (!patch_kobj)
		return -ENOMEM;

	patch_funcs_kobj = kobject_create_and_add("functions", patch_kobj);
	if (!patch_funcs_kobj) {
		ret = -ENOMEM;
		goto err_patch;
	}

	kpmod.mod = THIS_MODULE;
	INIT_LIST_HEAD(&kpmod.objects);

	ret = patch_make_funcs_list(&kpmod.objects);
	if (ret)
		goto err_objects;

	ret = patch_make_dynrelas_list(&kpmod.objects);
	if (ret)
		goto err_objects;

	ret = patch_make_hook_lists(&kpmod.objects);
	if (ret)
		goto err_objects;

	ret = kpatch_register(&kpmod, replace);
	if (ret)
		goto err_objects;

	ret = sysfs_create_group(patch_kobj, &patch_attr_group);
	if (ret)
		goto err_sysfs;

	return 0;

err_sysfs:
	kpatch_unregister(&kpmod);
err_objects:
	patch_free_objects();
	kobject_put(patch_funcs_kobj);
err_patch:
	kobject_put(patch_kobj);
	return ret;
}

static void __exit patch_exit(void)
{
	WARN_ON(kpmod.enabled);
	patch_free_objects();
	kobject_put(patch_funcs_kobj);
	sysfs_remove_group(patch_kobj, &patch_attr_group);
	kobject_put(patch_kobj);
}

module_init(patch_init);
module_exit(patch_exit);
MODULE_LICENSE("GPL");

kpatch-0.3.2/kmod/patch/kpatch-patch.h

/*
 * kpatch-patch.h
 *
 * Copyright (C) 2014 Josh Poimboeuf
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, see <http://www.gnu.org/licenses/>.
 *
 * Contains the structs used for the patch module special sections
 */

#ifndef _KPATCH_PATCH_H_
#define _KPATCH_PATCH_H_

struct kpatch_patch_func {
	unsigned long new_addr;
	unsigned long new_size;
	unsigned long old_addr;
	unsigned long old_size;
	unsigned long sympos;
	char *name;
	char *objname;
};

struct kpatch_patch_dynrela {
	unsigned long dest;
	unsigned long src;
	unsigned long type;
	unsigned long sympos;
	char *name;
	char *objname;
	int external;
	int addend;
};

struct kpatch_patch_hook {
	void (*hook)(void);
	char *objname;
};

#endif /* _KPATCH_PATCH_H_ */

kpatch-0.3.2/kmod/patch/kpatch.h -> ../core/kpatch.h (symlink)

kpatch-0.3.2/kmod/patch/kpatch.lds

__kpatch_funcs = ADDR(.kpatch.funcs);
__kpatch_funcs_end = ADDR(.kpatch.funcs) + SIZEOF(.kpatch.funcs);
__kpatch_dynrelas = ADDR(.kpatch.dynrelas);
__kpatch_dynrelas_end = ADDR(.kpatch.dynrelas) + SIZEOF(.kpatch.dynrelas);
__kpatch_checksum = ADDR(.kpatch.checksum);

SECTIONS
{
  .kpatch.hooks.load : {
    __kpatch_hooks_load = . ;
    *(.kpatch.hooks.load)
    __kpatch_hooks_load_end = . ;
    /*
     * Pad the end of the section with zeros in case the section is empty.
     * This prevents the kernel from discarding the section at module
     * load time. __kpatch_hooks_load_end will still point to the end of
     * the section before the padding. If the .kpatch.hooks.load section
     * is empty, __kpatch_hooks_load equals __kpatch_hooks_load_end.
     */
    QUAD(0);
  }
  .kpatch.hooks.unload : {
    __kpatch_hooks_unload = . ;
    *(.kpatch.hooks.unload)
    __kpatch_hooks_unload_end = . ;
    QUAD(0);
  }
  .kpatch.force : {
    __kpatch_force_funcs = . ;
    *(.kpatch.force)
    __kpatch_force_funcs_end = .
;
    QUAD(0);
  }
}

kpatch-0.3.2/kmod/patch/livepatch-patch-hook.c

/*
 * Copyright (C) 2013-2014 Josh Poimboeuf
 * Copyright (C) 2014 Seth Jennings
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA,
 * 02110-1301, USA.
 */

#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

/* header names inferred from usage in this file */
#include <linux/module.h>
#include <linux/printk.h>
#include <linux/slab.h>
#include <linux/list.h>
#include <linux/version.h>
#include <linux/livepatch.h>

#include "kpatch-patch.h"

/*
 * There are quite a few similar structures at play in this file:
 * - livepatch.h structs prefixed with klp_*
 * - kpatch-patch.h structs prefixed with kpatch_patch_*
 * - local scaffolding structs prefixed with patch_*
 *
 * The naming of the struct variables follows this convention:
 * - livepatch structs begin with "l" (e.g. lfunc)
 * - kpatch_patch structs begin with "k" (e.g. kfunc)
 * - local scaffolding structs have no prefix (e.g. func)
 *
 * The program reads in kpatch_patch structures, arranges them into the
 * scaffold structures, then creates a livepatch structure suitable for
 * registration with the livepatch kernel API. The scaffold structs only
 * exist to allow the construction of the klp_patch struct. Once that is
done, the scaffold structs are no longer needed.
*/ struct klp_patch *lpatch; static LIST_HEAD(patch_objects); static int patch_objects_nr; struct patch_object { struct list_head list; struct list_head funcs; struct list_head relocs; const char *name; int funcs_nr, relocs_nr; }; struct patch_func { struct list_head list; struct kpatch_patch_func *kfunc; }; struct patch_reloc { struct list_head list; struct kpatch_patch_dynrela *kdynrela; }; static struct patch_object *patch_alloc_new_object(const char *name) { struct patch_object *object; object = kzalloc(sizeof(*object), GFP_KERNEL); if (!object) return NULL; INIT_LIST_HEAD(&object->funcs); INIT_LIST_HEAD(&object->relocs); if (strcmp(name, "vmlinux")) object->name = name; list_add(&object->list, &patch_objects); patch_objects_nr++; return object; } static struct patch_object *patch_find_object_by_name(const char *name) { struct patch_object *object; list_for_each_entry(object, &patch_objects, list) if ((!strcmp(name, "vmlinux") && !object->name) || (object->name && !strcmp(object->name, name))) return object; return patch_alloc_new_object(name); } static int patch_add_func_to_object(struct kpatch_patch_func *kfunc) { struct patch_func *func; struct patch_object *object; func = kzalloc(sizeof(*func), GFP_KERNEL); if (!func) return -ENOMEM; INIT_LIST_HEAD(&func->list); func->kfunc = kfunc; object = patch_find_object_by_name(kfunc->objname); if (!object) { kfree(func); return -ENOMEM; } list_add(&func->list, &object->funcs); object->funcs_nr++; return 0; } static int patch_add_reloc_to_object(struct kpatch_patch_dynrela *kdynrela) { struct patch_reloc *reloc; struct patch_object *object; reloc = kzalloc(sizeof(*reloc), GFP_KERNEL); if (!reloc) return -ENOMEM; INIT_LIST_HEAD(&reloc->list); reloc->kdynrela = kdynrela; object = patch_find_object_by_name(kdynrela->objname); if (!object) { kfree(reloc); return -ENOMEM; } list_add(&reloc->list, &object->relocs); object->relocs_nr++; return 0; } static void patch_free_scaffold(void) { struct patch_func *func, *safefunc; 
struct patch_reloc *reloc, *safereloc; struct patch_object *object, *safeobject; list_for_each_entry_safe(object, safeobject, &patch_objects, list) { list_for_each_entry_safe(func, safefunc, &object->funcs, list) { list_del(&func->list); kfree(func); } list_for_each_entry_safe(reloc, safereloc, &object->relocs, list) { list_del(&reloc->list); kfree(reloc); } list_del(&object->list); kfree(object); } } static void patch_free_livepatch(struct klp_patch *patch) { struct klp_object *object; if (patch) { for (object = patch->objs; object && object->funcs; object++) { if (object->funcs) kfree(object->funcs); if (object->relocs) kfree(object->relocs); } if (patch->objs) kfree(patch->objs); kfree(patch); } } extern struct kpatch_patch_func __kpatch_funcs[], __kpatch_funcs_end[]; extern struct kpatch_patch_dynrela __kpatch_dynrelas[], __kpatch_dynrelas_end[]; static int __init patch_init(void) { struct kpatch_patch_func *kfunc; struct kpatch_patch_dynrela *kdynrela; struct klp_object *lobjects, *lobject; struct klp_func *lfuncs, *lfunc; struct klp_reloc *lrelocs, *lreloc; struct patch_object *object; struct patch_func *func; struct patch_reloc *reloc; int ret = 0, i, j; /* organize functions and relocs by object in scaffold */ for (kfunc = __kpatch_funcs; kfunc != __kpatch_funcs_end; kfunc++) { ret = patch_add_func_to_object(kfunc); if (ret) goto out; } for (kdynrela = __kpatch_dynrelas; kdynrela != __kpatch_dynrelas_end; kdynrela++) { ret = patch_add_reloc_to_object(kdynrela); if (ret) goto out; } /* past this point, only possible return code is -ENOMEM */ ret = -ENOMEM; /* allocate and fill livepatch structures */ lpatch = kzalloc(sizeof(*lpatch), GFP_KERNEL); if (!lpatch) goto out; lobjects = kzalloc(sizeof(*lobjects) * (patch_objects_nr+1), GFP_KERNEL); if (!lobjects) goto out; lpatch->mod = THIS_MODULE; lpatch->objs = lobjects; i = 0; list_for_each_entry(object, &patch_objects, list) { lobject = &lobjects[i]; lobject->name = object->name; lfuncs = kzalloc(sizeof(struct 
klp_func) * (object->funcs_nr+1), GFP_KERNEL);
		if (!lfuncs)
			goto out;
		lobject->funcs = lfuncs;
		j = 0;
		list_for_each_entry(func, &object->funcs, list) {
			lfunc = &lfuncs[j];
			lfunc->old_name = func->kfunc->name;
			lfunc->new_func = (void *)func->kfunc->new_addr;
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 5, 0)
			lfunc->old_sympos = func->kfunc->sympos;
#else
			lfunc->old_addr = func->kfunc->old_addr;
#endif
			j++;
		}

		lrelocs = kzalloc(sizeof(struct klp_reloc) *
				  (object->relocs_nr+1), GFP_KERNEL);
		if (!lrelocs)
			goto out;
		lobject->relocs = lrelocs;
		j = 0;
		list_for_each_entry(reloc, &object->relocs, list) {
			lreloc = &lrelocs[j];
			lreloc->loc = reloc->kdynrela->dest;
#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 5, 0)
			lreloc->sympos = reloc->kdynrela->sympos;
#else
			lreloc->val = reloc->kdynrela->src;
#endif
			lreloc->type = reloc->kdynrela->type;
			lreloc->name = reloc->kdynrela->name;
			lreloc->addend = reloc->kdynrela->addend;
			lreloc->external = reloc->kdynrela->external;
			j++;
		}

		i++;
	}

	/*
	 * Once the patch structure that the live patching API expects
	 * has been built, we can release the scaffold structure.
	 */
	patch_free_scaffold();

	ret = klp_register_patch(lpatch);
	if (ret) {
		patch_free_livepatch(lpatch);
		return ret;
	}

	ret = klp_enable_patch(lpatch);
	if (ret) {
		WARN_ON(klp_unregister_patch(lpatch));
		patch_free_livepatch(lpatch);
		return ret;
	}

	return 0;
out:
	patch_free_livepatch(lpatch);
	patch_free_scaffold();
	return ret;
}

static void __exit patch_exit(void)
{
	WARN_ON(klp_unregister_patch(lpatch));
}

module_init(patch_init);
module_exit(patch_exit);
MODULE_LICENSE("GPL");

kpatch-0.3.2/kmod/patch/patch-hook.c

/*
 * Copyright (C) 2015 Seth Jennings
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA,
 * 02110-1301, USA.
 */

#if IS_ENABLED(CONFIG_LIVEPATCH)
#include "livepatch-patch-hook.c"
#else
#include "kpatch-patch-hook.c"
#endif

kpatch-0.3.2/kpatch-build/Makefile

include ../Makefile.inc

CFLAGS += -I../kmod/patch -Iinsn -Wall -g -Werror
LDFLAGS += -lelf

TARGETS = create-diff-object
OBJS = create-diff-object.o lookup.o insn/insn.o insn/inat.o
SOURCES = create-diff-object.c lookup.c insn/insn.c insn/inat.c

all: $(TARGETS)

-include $(SOURCES:.c=.d)

%.o : %.c
	$(CC) -MMD -MP $(CFLAGS) -c -o $@ $<

create-diff-object: $(OBJS)
	$(CC) $(CFLAGS) $^ -o $@ $(LDFLAGS)

install: all
	$(INSTALL) -d $(LIBEXECDIR)
	$(INSTALL) $(TARGETS) kpatch-gcc $(LIBEXECDIR)
	$(INSTALL) -d $(BINDIR)
	$(INSTALL) kpatch-build $(BINDIR)

uninstall:
	$(RM) -R $(LIBEXECDIR)
	$(RM) $(BINDIR)/kpatch-build

clean:
	$(RM) $(TARGETS) $(OBJS) *.d insn/*.d

kpatch-0.3.2/kpatch-build/create-diff-object.c

/*
 * create-diff-object.c
 *
 * Copyright (C) 2014 Seth Jennings
 * Copyright (C) 2013-2014 Josh Poimboeuf
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA,
 * 02110-1301, USA.
 */

/*
 * This file contains the heart of the ELF object differencing engine.
 *
 * The tool takes two ELF objects from two versions of the same source
 * file; a "base" object and a "patched" object. These objects need to
 * have been compiled with the -ffunction-sections and -fdata-sections
 * GCC options.
 *
 * The tool compares the objects at a section level to determine what
 * sections have changed. Once a list of changed sections has been
 * generated, various rules are applied to determine any object local
 * sections that are dependencies of the changed section and also need
 * to be included in the output object.
 */

/* header names inferred from usage in this file */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <error.h>
#include <gelf.h>
#include <argp.h>
#include <libgen.h>
#include <ctype.h>

#include "list.h"
#include "lookup.h"
#include "asm/insn.h"
#include "kpatch-patch.h"

#define ERROR(format, ...) \
	error(1, 0, "ERROR: %s: %s: %d: " format, childobj, __FUNCTION__, __LINE__, ##__VA_ARGS__)

#define DIFF_FATAL(format, ...) \
({ \
	fprintf(stderr, "ERROR: %s: " format "\n", childobj, ##__VA_ARGS__); \
	error(2, 0, "unreconcilable difference"); \
})

#define log_debug(format, ...) log(DEBUG, format, ##__VA_ARGS__)
#define log_normal(format, ...) log(NORMAL, "%s: " format, childobj, ##__VA_ARGS__)

#define log(level, format, ...)
\ ({ \ if (loglevel <= (level)) \ printf(format, ##__VA_ARGS__); \ }) char *childobj; enum loglevel { DEBUG, NORMAL }; static enum loglevel loglevel = NORMAL; /******************* * Data structures * ****************/ struct section; struct symbol; struct rela; enum status { NEW, CHANGED, SAME }; struct section { struct list_head list; struct section *twin; GElf_Shdr sh; Elf_Data *data; char *name; int index; enum status status; int include; int ignore; int grouped; union { struct { /* if (is_rela_section()) */ struct section *base; struct list_head relas; }; struct { /* else */ struct section *rela; struct symbol *secsym, *sym; }; }; }; struct symbol { struct list_head list; struct symbol *twin; struct section *sec; GElf_Sym sym; char *name; int index; unsigned char bind, type; enum status status; union { int include; /* used in the patched elf */ int strip; /* used in the output elf */ }; int has_fentry_call; }; struct rela { struct list_head list; GElf_Rela rela; struct symbol *sym; unsigned int type; int addend; int offset; char *string; }; struct string { struct list_head list; char *name; }; struct kpatch_elf { Elf *elf; struct list_head sections; struct list_head symbols; struct list_head strings; int fd; }; struct special_section { char *name; int (*group_size)(struct kpatch_elf *kelf, int offset); }; /******************* * Helper functions ******************/ char *status_str(enum status status) { switch(status) { case NEW: return "NEW"; case CHANGED: return "CHANGED"; case SAME: return "SAME"; default: ERROR("status_str"); } /* never reached */ return NULL; } int is_rela_section(struct section *sec) { return (sec->sh.sh_type == SHT_RELA); } int is_text_section(struct section *sec) { return (sec->sh.sh_type == SHT_PROGBITS && (sec->sh.sh_flags & SHF_EXECINSTR)); } int is_debug_section(struct section *sec) { char *name; if (is_rela_section(sec)) name = sec->base->name; else name = sec->name; return !strncmp(name, ".debug_", 7); } struct section 
*find_section_by_index(struct list_head *list, unsigned int index) { struct section *sec; list_for_each_entry(sec, list, list) if (sec->index == index) return sec; return NULL; } struct section *find_section_by_name(struct list_head *list, const char *name) { struct section *sec; list_for_each_entry(sec, list, list) if (!strcmp(sec->name, name)) return sec; return NULL; } struct symbol *find_symbol_by_index(struct list_head *list, size_t index) { struct symbol *sym; list_for_each_entry(sym, list, list) if (sym->index == index) return sym; return NULL; } struct symbol *find_symbol_by_name(struct list_head *list, const char *name) { struct symbol *sym; list_for_each_entry(sym, list, list) if (sym->name && !strcmp(sym->name, name)) return sym; return NULL; } #define ALLOC_LINK(_new, _list) \ { \ (_new) = malloc(sizeof(*(_new))); \ if (!(_new)) \ ERROR("malloc"); \ memset((_new), 0, sizeof(*(_new))); \ INIT_LIST_HEAD(&(_new)->list); \ list_add_tail(&(_new)->list, (_list)); \ } /* returns the offset of the string in the string table */ int offset_of_string(struct list_head *list, char *name) { struct string *string; int index = 0; /* try to find string in the string list */ list_for_each_entry(string, list, list) { if (!strcmp(string->name, name)) return index; index += strlen(string->name) + 1; } /* allocate a new string */ ALLOC_LINK(string, list); string->name = name; return index; } /************* * Functions * **********/ void kpatch_create_rela_list(struct kpatch_elf *kelf, struct section *sec) { int rela_nr, index = 0, skip = 0; struct rela *rela; unsigned int symndx; /* find matching base (text/data) section */ sec->base = find_section_by_name(&kelf->sections, sec->name + 5); if (!sec->base) ERROR("can't find base section for rela section %s", sec->name); /* create reverse link from base section to this rela section */ sec->base->rela = sec; rela_nr = sec->sh.sh_size / sec->sh.sh_entsize; log_debug("\n=== rela list for %s (%d entries) ===\n", sec->base->name, 
rela_nr); if (is_debug_section(sec)) { log_debug("skipping rela listing for .debug_* section\n"); skip = 1; } /* read and store the rela entries */ while (rela_nr--) { ALLOC_LINK(rela, &sec->relas); if (!gelf_getrela(sec->data, index, &rela->rela)) ERROR("gelf_getrela"); index++; rela->type = GELF_R_TYPE(rela->rela.r_info); rela->addend = rela->rela.r_addend; rela->offset = rela->rela.r_offset; symndx = GELF_R_SYM(rela->rela.r_info); rela->sym = find_symbol_by_index(&kelf->symbols, symndx); if (!rela->sym) ERROR("could not find rela entry symbol\n"); if (rela->sym->sec && (rela->sym->sec->sh.sh_flags & SHF_STRINGS)) { rela->string = rela->sym->sec->data->d_buf + rela->addend; if (!rela->string) ERROR("could not lookup rela string for %s+%d", rela->sym->name, rela->addend); } if (skip) continue; log_debug("offset %d, type %d, %s %s %d", rela->offset, rela->type, rela->sym->name, (rela->addend < 0)?"-":"+", abs(rela->addend)); if (rela->string) log_debug(" (string = %s)", rela->string); log_debug("\n"); } } void kpatch_create_section_list(struct kpatch_elf *kelf) { Elf_Scn *scn = NULL; struct section *sec; size_t shstrndx, sections_nr; if (elf_getshdrnum(kelf->elf, §ions_nr)) ERROR("elf_getshdrnum"); /* * elf_getshdrnum() includes section index 0 but elf_nextscn * doesn't return that section so subtract one. 
*/ sections_nr--; if (elf_getshdrstrndx(kelf->elf, &shstrndx)) ERROR("elf_getshdrstrndx"); log_debug("=== section list (%zu) ===\n", sections_nr); while (sections_nr--) { ALLOC_LINK(sec, &kelf->sections); scn = elf_nextscn(kelf->elf, scn); if (!scn) ERROR("scn NULL"); if (!gelf_getshdr(scn, &sec->sh)) ERROR("gelf_getshdr"); sec->name = elf_strptr(kelf->elf, shstrndx, sec->sh.sh_name); if (!sec->name) ERROR("elf_strptr"); sec->data = elf_getdata(scn, NULL); if (!sec->data) ERROR("elf_getdata"); sec->index = elf_ndxscn(scn); log_debug("ndx %02d, data %p, size %zu, name %s\n", sec->index, sec->data->d_buf, sec->data->d_size, sec->name); } /* Sanity check, one more call to elf_nextscn() should return NULL */ if (elf_nextscn(kelf->elf, scn)) ERROR("expected NULL"); } int is_bundleable(struct symbol *sym) { if (sym->type == STT_FUNC && !strncmp(sym->sec->name, ".text.",6) && !strcmp(sym->sec->name + 6, sym->name)) return 1; if (sym->type == STT_FUNC && !strncmp(sym->sec->name, ".text.unlikely.",15) && !strcmp(sym->sec->name + 15, sym->name)) return 1; if (sym->type == STT_OBJECT && !strncmp(sym->sec->name, ".data.",6) && !strcmp(sym->sec->name + 6, sym->name)) return 1; if (sym->type == STT_OBJECT && !strncmp(sym->sec->name, ".rodata.",8) && !strcmp(sym->sec->name + 8, sym->name)) return 1; if (sym->type == STT_OBJECT && !strncmp(sym->sec->name, ".bss.",5) && !strcmp(sym->sec->name + 5, sym->name)) return 1; return 0; } void kpatch_create_symbol_list(struct kpatch_elf *kelf) { struct section *symtab; struct symbol *sym; int symbols_nr, index = 0; symtab = find_section_by_name(&kelf->sections, ".symtab"); if (!symtab) ERROR("missing symbol table"); symbols_nr = symtab->sh.sh_size / symtab->sh.sh_entsize; log_debug("\n=== symbol list (%d entries) ===\n", symbols_nr); while (symbols_nr--) { ALLOC_LINK(sym, &kelf->symbols); sym->index = index; if (!gelf_getsym(symtab->data, index, &sym->sym)) ERROR("gelf_getsym"); index++; sym->name = elf_strptr(kelf->elf, 
symtab->sh.sh_link, sym->sym.st_name); if (!sym->name) ERROR("elf_strptr"); sym->type = GELF_ST_TYPE(sym->sym.st_info); sym->bind = GELF_ST_BIND(sym->sym.st_info); if (sym->sym.st_shndx > SHN_UNDEF && sym->sym.st_shndx < SHN_LORESERVE) { sym->sec = find_section_by_index(&kelf->sections, sym->sym.st_shndx); if (!sym->sec) ERROR("couldn't find section for symbol %s\n", sym->name); if (is_bundleable(sym)) { if (sym->sym.st_value != 0) ERROR("symbol %s at offset %lu within section %s, expected 0", sym->name, sym->sym.st_value, sym->sec->name); sym->sec->sym = sym; } else if (sym->type == STT_SECTION) { sym->sec->secsym = sym; /* use the section name as the symbol name */ sym->name = sym->sec->name; } } log_debug("sym %02d, type %d, bind %d, ndx %02d, name %s", sym->index, sym->type, sym->bind, sym->sym.st_shndx, sym->name); if (sym->sec) log_debug(" -> %s", sym->sec->name); log_debug("\n"); } } /* Check which functions have fentry calls; save this info for later use. */ static void kpatch_find_fentry_calls(struct kpatch_elf *kelf) { struct symbol *sym; struct rela *rela; list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type != STT_FUNC || !sym->sec->rela) continue; rela = list_first_entry(&sym->sec->rela->relas, struct rela, list); if (rela->type != R_X86_64_NONE || strcmp(rela->sym->name, "__fentry__")) continue; sym->has_fentry_call = 1; } } struct kpatch_elf *kpatch_elf_open(const char *name) { Elf *elf; int fd; struct kpatch_elf *kelf; struct section *sec; fd = open(name, O_RDONLY); if (fd == -1) ERROR("open"); elf = elf_begin(fd, ELF_C_READ_MMAP, NULL); if (!elf) ERROR("elf_begin"); kelf = malloc(sizeof(*kelf)); if (!kelf) ERROR("malloc"); memset(kelf, 0, sizeof(*kelf)); INIT_LIST_HEAD(&kelf->sections); INIT_LIST_HEAD(&kelf->symbols); INIT_LIST_HEAD(&kelf->strings); /* read and store section, symbol entries from file */ kelf->elf = elf; kelf->fd = fd; kpatch_create_section_list(kelf); kpatch_create_symbol_list(kelf); /* for each rela section, read and 
store the rela entries */ list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec)) continue; INIT_LIST_HEAD(&sec->relas); kpatch_create_rela_list(kelf, sec); } kpatch_find_fentry_calls(kelf); return kelf; } /* * This function detects whether the given symbol is a "special" static local * variable (for lack of a better term). * * Special static local variables should never be correlated and should always * be included if they are referenced by an included function. */ static int is_special_static(struct symbol *sym) { static char *prefixes[] = { "__key.", "__warned.", "descriptor.", "__func__.", "_rs.", NULL, }; char **prefix; if (!sym) return 0; if (sym->type == STT_SECTION) { /* __verbose section contains the descriptor variables */ if (!strcmp(sym->name, "__verbose")) return 1; /* otherwise make sure section is bundled */ if (!sym->sec->sym) return 0; /* use bundled object/function symbol for matching */ sym = sym->sec->sym; } if (sym->type != STT_OBJECT || sym->bind != STB_LOCAL) return 0; for (prefix = prefixes; *prefix; prefix++) if (!strncmp(sym->name, *prefix, strlen(*prefix))) return 1; return 0; } /* * This is like strcmp, but for gcc-mangled symbols. It skips the comparison * of any substring which consists of '.' followed by any number of digits. */ static int kpatch_mangled_strcmp(char *s1, char *s2) { while (*s1 == *s2) { if (!*s1) return 0; if (*s1 == '.' 
&& isdigit(s1[1])) { if (!isdigit(s2[1])) return 1; while (isdigit(*++s1)) ; while (isdigit(*++s2)) ; } else { s1++; s2++; } } return 1; } int rela_equal(struct rela *rela1, struct rela *rela2) { if (rela1->type != rela2->type || rela1->offset != rela2->offset) return 0; if (rela1->string) return rela2->string && !strcmp(rela1->string, rela2->string); if (rela1->addend != rela2->addend) return 0; if (is_special_static(rela1->sym)) return !kpatch_mangled_strcmp(rela1->sym->name, rela2->sym->name); return !strcmp(rela1->sym->name, rela2->sym->name); } void kpatch_compare_correlated_rela_section(struct section *sec) { struct rela *rela1, *rela2 = NULL; rela2 = list_entry(sec->twin->relas.next, struct rela, list); list_for_each_entry(rela1, &sec->relas, list) { if (rela_equal(rela1, rela2)) { rela2 = list_entry(rela2->list.next, struct rela, list); continue; } sec->status = CHANGED; return; } sec->status = SAME; } void kpatch_compare_correlated_nonrela_section(struct section *sec) { struct section *sec1 = sec, *sec2 = sec->twin; if (sec1->sh.sh_type != SHT_NOBITS && memcmp(sec1->data->d_buf, sec2->data->d_buf, sec1->data->d_size)) sec->status = CHANGED; else sec->status = SAME; } void kpatch_compare_correlated_section(struct section *sec) { struct section *sec1 = sec, *sec2 = sec->twin; /* Compare section headers (must match or fatal) */ if (sec1->sh.sh_type != sec2->sh.sh_type || sec1->sh.sh_flags != sec2->sh.sh_flags || sec1->sh.sh_addr != sec2->sh.sh_addr || sec1->sh.sh_addralign != sec2->sh.sh_addralign || sec1->sh.sh_entsize != sec2->sh.sh_entsize) DIFF_FATAL("%s section header details differ", sec1->name); /* Short circuit for mcount sections, we rebuild regardless */ if (!strcmp(sec->name, ".rela__mcount_loc") || !strcmp(sec->name, "__mcount_loc")) { sec->status = SAME; goto out; } if (sec1->sh.sh_size != sec2->sh.sh_size || sec1->data->d_size != sec2->data->d_size) { sec->status = CHANGED; goto out; } if (is_rela_section(sec)) 
		kpatch_compare_correlated_rela_section(sec);
	else
		kpatch_compare_correlated_nonrela_section(sec);
out:
	if (sec->status == CHANGED)
		log_debug("section %s has changed\n", sec->name);
}

/*
 * Determine if a section has changed only due to a WARN* macro call's
 * embedding of the line number into an instruction operand.
 *
 * Warning: Hackery lies herein.  It's hopefully justified by the fact that
 * this issue is very common.
 *
 * base object:
 *
 *  be 5e 04 00 00          mov    $0x45e,%esi
 *  48 c7 c7 ff 5a a1 81    mov    $0xffffffff81a15aff,%rdi
 *  e8 26 13 08 00          callq  ffffffff8108d0d0
 *
 * patched object:
 *
 *  be 5f 04 00 00          mov    $0x45f,%esi
 *  48 c7 c7 ff 5a a1 81    mov    $0xffffffff81a15aff,%rdi
 *  e8 26 13 08 00          callq  ffffffff8108d0d0
 *
 * The above is the most common case.  The pattern which applies to all cases
 * is an immediate move of the line number to %esi followed by zero or more
 * relas to a string section followed by a rela to warn_slowpath_*.
 */
static int kpatch_warn_only_change(struct section *sec)
{
	struct insn insn1, insn2;
	unsigned long start1, start2, size, offset, length;
	struct rela *rela;
	int warnonly = 0, found;

	if (sec->status != CHANGED ||
	    is_rela_section(sec) ||
	    !is_text_section(sec) ||
	    sec->sh.sh_size != sec->twin->sh.sh_size ||
	    !sec->rela ||
	    sec->rela->status != SAME)
		return 0;

	start1 = (unsigned long)sec->twin->data->d_buf;
	start2 = (unsigned long)sec->data->d_buf;
	size = sec->sh.sh_size;
	for (offset = 0; offset < size; offset += length) {
		insn_init(&insn1, (void *)(start1 + offset), 1);
		insn_init(&insn2, (void *)(start2 + offset), 1);
		insn_get_length(&insn1);
		insn_get_length(&insn2);
		length = insn1.length;

		if (!insn1.length || !insn2.length)
			ERROR("can't decode instruction in section %s at offset 0x%lx",
			      sec->name, offset);

		if (insn1.length != insn2.length)
			return 0;

		if (!memcmp((void *)start1 + offset, (void *)start2 + offset,
			    length))
			continue;

		/* verify it's a mov immediate to %esi */
		insn_get_opcode(&insn1);
		insn_get_opcode(&insn2);
		if (insn1.opcode.value != 0xbe ||
		    insn2.opcode.value != 0xbe)
			return 0;

		/*
		 * Verify zero or more string relas followed by a
		 * warn_slowpath_* rela.
		 */
		found = 0;
		list_for_each_entry(rela, &sec->rela->relas, list) {
			if (rela->offset < offset + length)
				continue;
			if (rela->string)
				continue;
			if (!strncmp(rela->sym->name, "warn_slowpath_", 14)) {
				found = 1;
				break;
			}
			return 0;
		}
		if (!found)
			return 0;

		warnonly = 1;
	}

	if (!warnonly)
		ERROR("no instruction changes detected for changed section %s",
		      sec->name);

	return 1;
}

void kpatch_compare_sections(struct list_head *seclist)
{
	struct section *sec;

	/* compare all sections */
	list_for_each_entry(sec, seclist, list) {
		if (sec->twin)
			kpatch_compare_correlated_section(sec);
		else
			sec->status = NEW;
	}

	/* exclude WARN-only changes */
	list_for_each_entry(sec, seclist, list) {
		if (kpatch_warn_only_change(sec)) {
			log_debug("reverting WARN-only section %s status to SAME\n",
				  sec->name);
			sec->status = SAME;
		}
	}

	/* sync symbol status */
	list_for_each_entry(sec, seclist, list) {
		if (is_rela_section(sec)) {
			if (sec->base->sym && sec->base->sym->status != CHANGED)
				sec->base->sym->status = sec->status;
		} else {
			if (sec->sym && sec->sym->status != CHANGED)
				sec->sym->status = sec->status;
		}
	}
}

void kpatch_compare_correlated_symbol(struct symbol *sym)
{
	struct symbol *sym1 = sym, *sym2 = sym->twin;

	if (sym1->sym.st_info != sym2->sym.st_info ||
	    sym1->sym.st_other != sym2->sym.st_other ||
	    (sym1->sec && !sym2->sec) ||
	    (sym2->sec && !sym1->sec))
		DIFF_FATAL("symbol info mismatch: %s", sym1->name);

	/*
	 * If two symbols are correlated but their sections are not, then the
	 * symbol has changed sections.  This is only allowed if the symbol is
	 * moving out of an ignored section.
	 */
	if (sym1->sec && sym2->sec && sym1->sec->twin != sym2->sec) {
		if (sym2->sec->twin && sym2->sec->twin->ignore)
			sym->status = CHANGED;
		else
			DIFF_FATAL("symbol changed sections: %s", sym1->name);
	}

	if (sym1->type == STT_OBJECT &&
	    sym1->sym.st_size != sym2->sym.st_size)
		DIFF_FATAL("object size mismatch: %s", sym1->name);

	if (sym1->sym.st_shndx == SHN_UNDEF ||
	    sym1->sym.st_shndx == SHN_ABS)
		sym1->status = SAME;

	/*
	 * The status of LOCAL symbols is dependent on the status of their
	 * matching section and is set during section comparison.
	 */
}

void kpatch_compare_symbols(struct list_head *symlist)
{
	struct symbol *sym;

	list_for_each_entry(sym, symlist, list) {
		if (sym->twin)
			kpatch_compare_correlated_symbol(sym);
		else
			sym->status = NEW;

		log_debug("symbol %s is %s\n", sym->name, status_str(sym->status));
	}
}

void kpatch_correlate_sections(struct list_head *seclist1, struct list_head *seclist2)
{
	struct section *sec1, *sec2;

	list_for_each_entry(sec1, seclist1, list) {
		list_for_each_entry(sec2, seclist2, list) {
			if (strcmp(sec1->name, sec2->name))
				continue;

			if (is_special_static(is_rela_section(sec1) ?
					      sec1->base->secsym :
					      sec1->secsym))
				continue;

			/*
			 * Group sections must match exactly to be correlated.
			 * Changed group sections are currently not supported.
			 */
			if (sec1->sh.sh_type == SHT_GROUP) {
				if (sec1->data->d_size != sec2->data->d_size)
					continue;
				if (memcmp(sec1->data->d_buf, sec2->data->d_buf,
					   sec1->data->d_size))
					continue;
			}
			sec1->twin = sec2;
			sec2->twin = sec1;
			/* set initial status, might change */
			sec1->status = sec2->status = SAME;
			break;
		}
	}
}

void kpatch_correlate_symbols(struct list_head *symlist1, struct list_head *symlist2)
{
	struct symbol *sym1, *sym2;

	list_for_each_entry(sym1, symlist1, list) {
		list_for_each_entry(sym2, symlist2, list) {
			if (strcmp(sym1->name, sym2->name) ||
			    sym1->type != sym2->type)
				continue;

			if (is_special_static(sym1))
				continue;

			/* group section symbols must have correlated sections */
			if (sym1->sec &&
			    sym1->sec->sh.sh_type == SHT_GROUP &&
			    sym1->sec->twin != sym2->sec)
				continue;

			sym1->twin = sym2;
			sym2->twin = sym1;
			/* set initial status, might change */
			sym1->status = sym2->status = SAME;
			break;
		}
	}
}

void kpatch_compare_elf_headers(Elf *elf1, Elf *elf2)
{
	GElf_Ehdr eh1, eh2;

	if (!gelf_getehdr(elf1, &eh1))
		ERROR("gelf_getehdr");

	if (!gelf_getehdr(elf2, &eh2))
		ERROR("gelf_getehdr");

	if (memcmp(eh1.e_ident, eh2.e_ident, EI_NIDENT) ||
	    eh1.e_type != eh2.e_type ||
	    eh1.e_machine != eh2.e_machine ||
	    eh1.e_version != eh2.e_version ||
	    eh1.e_entry != eh2.e_entry ||
	    eh1.e_phoff != eh2.e_phoff ||
	    eh1.e_flags != eh2.e_flags ||
	    eh1.e_ehsize != eh2.e_ehsize ||
	    eh1.e_phentsize != eh2.e_phentsize ||
	    eh1.e_shentsize != eh2.e_shentsize)
		DIFF_FATAL("ELF headers differ");
}

void kpatch_check_program_headers(Elf *elf)
{
	size_t ph_nr;

	if (elf_getphdrnum(elf, &ph_nr))
		ERROR("elf_getphdrnum");

	if (ph_nr != 0)
		DIFF_FATAL("ELF contains program header");
}

void kpatch_mark_grouped_sections(struct kpatch_elf *kelf)
{
	struct section *groupsec, *sec;
	unsigned int *data, *end;

	list_for_each_entry(groupsec, &kelf->sections, list) {
		if (groupsec->sh.sh_type != SHT_GROUP)
			continue;
		data = groupsec->data->d_buf;
		end = groupsec->data->d_buf + groupsec->data->d_size;
		data++; /* skip first flag word (e.g. GRP_COMDAT) */
		while (data < end) {
			sec = find_section_by_index(&kelf->sections, *data);
			if (!sec)
				ERROR("group section not found");
			sec->grouped = 1;
			log_debug("marking section %s (%d) as grouped\n",
			          sec->name, sec->index);
			data++;
		}
	}
}

/*
 * When gcc makes compiler optimizations which affect a function's calling
 * interface, it mangles the function's name.  For example, sysctl_print_dir
 * is renamed to sysctl_print_dir.isra.2.  The problem is that the trailing
 * number is chosen arbitrarily, and the patched version of the function may
 * end up with a different trailing number.  Rename any mangled patched
 * functions to match their base counterparts.
 */
void kpatch_rename_mangled_functions(struct kpatch_elf *base,
				     struct kpatch_elf *patched)
{
	struct symbol *sym, *basesym;
	char name[256], *origname;
	struct section *sec, *basesec;
	int found;

	list_for_each_entry(sym, &patched->symbols, list) {
		if (sym->type != STT_FUNC)
			continue;

		if (!strstr(sym->name, ".isra.") &&
		    !strstr(sym->name, ".constprop.") &&
		    !strstr(sym->name, ".part."))
			continue;

		found = 0;
		list_for_each_entry(basesym, &base->symbols, list) {
			if (!kpatch_mangled_strcmp(basesym->name, sym->name)) {
				found = 1;
				break;
			}
		}

		if (!found)
			continue;

		if (!strcmp(sym->name, basesym->name))
			continue;

		log_debug("renaming %s to %s\n", sym->name, basesym->name);
		origname = sym->name;
		sym->name = strdup(basesym->name);

		if (sym != sym->sec->sym)
			continue;

		sym->sec->name = strdup(basesym->sec->name);
		if (sym->sec->rela)
			sym->sec->rela->name = strdup(basesym->sec->rela->name);

		/*
		 * When function foo.isra.1 has a switch statement, it might
		 * have a corresponding bundled .rodata.foo.isra.1 section (in
		 * addition to .text.foo.isra.1 which we renamed above).
		 */
		sprintf(name, ".rodata.%s", origname);
		sec = find_section_by_name(&patched->sections, name);
		if (!sec)
			continue;
		sprintf(name, ".rodata.%s", basesym->name);
		basesec = find_section_by_name(&base->sections, name);
		if (!basesec)
			continue;
		sec->name = strdup(basesec->name);
		sec->secsym->name = sec->name;
		if (sec->rela)
			sec->rela->name = strdup(basesec->rela->name);
	}
}

static char *kpatch_section_function_name(struct section *sec)
{
	if (is_rela_section(sec))
		sec = sec->base;
	return sec->sym ? sec->sym->name : sec->name;
}

/*
 * Given a static local variable symbol and a rela section which references it
 * in the base object, find a corresponding usage of a similarly named symbol
 * in the patched object.
 */
static struct symbol *kpatch_find_static_twin(struct section *sec,
					      struct symbol *sym)
{
	struct rela *rela;

	if (!sec->twin)
		return NULL;

	/* find the patched object's corresponding variable */
	list_for_each_entry(rela, &sec->twin->relas, list) {
		if (rela->sym->twin)
			continue;
		if (kpatch_mangled_strcmp(rela->sym->name, sym->name))
			continue;
		return rela->sym;
	}

	return NULL;
}

static int kpatch_is_normal_static_local(struct symbol *sym)
{
	if (sym->type != STT_OBJECT || sym->bind != STB_LOCAL)
		return 0;
	if (!strchr(sym->name, '.'))
		return 0;
	if (is_special_static(sym))
		return 0;
	return 1;
}

/*
 * gcc renames static local variables by appending a period and a number.  For
 * example, __foo could be renamed to __foo.31452.  Unfortunately this number
 * can arbitrarily change.  Correlate them by comparing which functions
 * reference them, and rename the patched symbols to match the base symbol
 * names.
 *
 * Some surprising facts about static local variable symbols:
 *
 * - It's possible for multiple functions to use the same static local
 *   variable if the variable is defined in an inlined function.
 *
 * - It's also possible for multiple static local variables with the same
 *   name to be used in the same function if they have different scopes.  (We
 *   have to assume that in such cases, the order in which they're referenced
 *   remains the same between the base and patched objects, as there's no
 *   other way to distinguish them.)
 *
 * - Static locals are usually referenced by functions, but they can
 *   occasionally be referenced by data sections as well.
 */
void kpatch_correlate_static_local_variables(struct kpatch_elf *base,
					     struct kpatch_elf *patched)
{
	struct symbol *sym, *patched_sym;
	struct section *sec;
	struct rela *rela, *rela2;
	int bundled, patched_bundled, found;

	/*
	 * First undo the correlations for all static locals.  Two static
	 * locals can have the same numbered suffix in the base and patched
	 * objects by coincidence.
	 */
	list_for_each_entry(sym, &base->symbols, list) {
		if (!kpatch_is_normal_static_local(sym))
			continue;

		if (sym->twin) {
			sym->twin->twin = NULL;
			sym->twin = NULL;
		}

		bundled = sym == sym->sec->sym;
		if (bundled && sym->sec->twin) {
			sym->sec->twin->twin = NULL;
			sym->sec->twin = NULL;

			sym->sec->secsym->twin->twin = NULL;
			sym->sec->secsym->twin = NULL;

			if (sym->sec->rela) {
				sym->sec->rela->twin->twin = NULL;
				sym->sec->rela->twin = NULL;
			}
		}
	}

	/*
	 * Do the correlations: for each section reference to a static local,
	 * look for a corresponding reference in the section's twin.
	 */
	list_for_each_entry(sec, &base->sections, list) {
		if (!is_rela_section(sec) || is_debug_section(sec))
			continue;
		list_for_each_entry(rela, &sec->relas, list) {
			sym = rela->sym;
			if (!kpatch_is_normal_static_local(sym))
				continue;
			if (sym->twin)
				continue;

			bundled = sym == sym->sec->sym;
			if (bundled && sym->sec == sec->base) {
				/*
				 * A rare case where a static local data
				 * structure references itself.  There's no
				 * reliable way to correlate this.  Hopefully
				 * there's another reference to the symbol
				 * somewhere that can be used.
				 */
				log_debug("can't correlate static local %s's reference to itself\n",
					  sym->name);
				continue;
			}

			patched_sym = kpatch_find_static_twin(sec, sym);
			if (!patched_sym)
				DIFF_FATAL("reference to static local variable %s in %s was removed",
					   sym->name,
					   kpatch_section_function_name(sec));

			patched_bundled = patched_sym == patched_sym->sec->sym;
			if (bundled != patched_bundled)
				ERROR("bundle mismatch for symbol %s", sym->name);
			if (!bundled && sym->sec->twin != patched_sym->sec)
				ERROR("sections %s and %s aren't correlated",
				      sym->sec->name, patched_sym->sec->name);

			log_debug("renaming and correlating static local %s to %s\n",
				  patched_sym->name, sym->name);

			patched_sym->name = strdup(sym->name);
			sym->twin = patched_sym;
			patched_sym->twin = sym;

			if (bundled) {
				sym->sec->twin = patched_sym->sec;
				patched_sym->sec->twin = sym->sec;

				sym->sec->secsym->twin = patched_sym->sec->secsym;
				patched_sym->sec->secsym->twin = sym->sec->secsym;

				if (sym->sec->rela && patched_sym->sec->rela) {
					sym->sec->rela->twin = patched_sym->sec->rela;
					patched_sym->sec->rela->twin = sym->sec->rela;
				}
			}
		}
	}

	/*
	 * Make sure that:
	 *
	 * 1. all the base object's referenced static locals have been
	 *    correlated; and
	 *
	 * 2. each reference to a static local in the base object has a
	 *    corresponding reference in the patched object (because a static
	 *    local can be referenced by more than one section).
	 */
	list_for_each_entry(sec, &base->sections, list) {
		if (!is_rela_section(sec) || is_debug_section(sec))
			continue;
		list_for_each_entry(rela, &sec->relas, list) {
			sym = rela->sym;
			if (!kpatch_is_normal_static_local(sym))
				continue;

			if (!sym->twin || !sec->twin)
				DIFF_FATAL("reference to static local variable %s in %s was removed",
					   sym->name,
					   kpatch_section_function_name(sec));

			found = 0;
			list_for_each_entry(rela2, &sec->twin->relas, list) {
				if (rela2->sym == sym->twin) {
					found = 1;
					break;
				}
			}
			if (!found)
				DIFF_FATAL("static local %s has been correlated with %s, but patched %s is missing a reference to it",
					   sym->name, sym->twin->name,
					   kpatch_section_function_name(sec->twin));
		}
	}

	/*
	 * Now go through the patched object and look for any uncorrelated
	 * static locals to see if we need to print any warnings about new
	 * variables.
	 */
	list_for_each_entry(sec, &patched->sections, list) {
		if (!is_rela_section(sec) || is_debug_section(sec))
			continue;
		list_for_each_entry(rela, &sec->relas, list) {
			sym = rela->sym;
			if (!kpatch_is_normal_static_local(sym))
				continue;
			if (sym->twin)
				continue;

			log_normal("WARNING: unable to correlate static local variable %s used by %s, assuming variable is new\n",
				   sym->name,
				   kpatch_section_function_name(sec));
			return;
		}
	}
}

void kpatch_correlate_elfs(struct kpatch_elf *kelf1, struct kpatch_elf *kelf2)
{
	kpatch_correlate_sections(&kelf1->sections, &kelf2->sections);
	kpatch_correlate_symbols(&kelf1->symbols, &kelf2->symbols);
}

void kpatch_compare_correlated_elements(struct kpatch_elf *kelf)
{
	/* lists are already correlated at this point */
	kpatch_compare_sections(&kelf->sections);
	kpatch_compare_symbols(&kelf->symbols);
}

void rela_insn(struct section *sec, struct rela *rela, struct insn *insn)
{
	unsigned long insn_addr, start, end, rela_addr;

	start = (unsigned long)sec->base->data->d_buf;
	end = start + sec->base->sh.sh_size;
	rela_addr = start + rela->offset;
	for (insn_addr = start; insn_addr < end; insn_addr += insn->length) {
		insn_init(insn, (void *)insn_addr, 1);
		insn_get_length(insn);
		if (!insn->length)
			ERROR("can't decode instruction in section %s at offset 0x%lx",
			      sec->base->name, insn_addr);
		if (rela_addr >= insn_addr &&
		    rela_addr < insn_addr + insn->length)
			return;
	}
}

/*
 * Mangle the relas a little.  The compiler will sometimes use section symbols
 * to reference local objects and functions rather than the object or function
 * symbols themselves.  We substitute the object/function symbols for the
 * section symbol in this case so that the relas can be properly correlated
 * and so that the existing object/function in vmlinux can be linked to.
 */
void kpatch_replace_sections_syms(struct kpatch_elf *kelf)
{
	struct section *sec;
	struct rela *rela;
	struct symbol *sym;
	int add_off;

	log_debug("\n");
	list_for_each_entry(sec, &kelf->sections, list) {
		if (!is_rela_section(sec) || is_debug_section(sec))
			continue;

		list_for_each_entry(rela, &sec->relas, list) {
			if (rela->sym->type != STT_SECTION)
				continue;

			/*
			 * Replace references to bundled sections with their
			 * symbols.
			 */
			if (rela->sym->sec && rela->sym->sec->sym) {
				rela->sym = rela->sym->sec->sym;
				continue;
			}

			if (rela->type == R_X86_64_PC32) {
				struct insn insn;
				rela_insn(sec, rela, &insn);
				add_off = (long)insn.next_byte -
					  (long)sec->base->data->d_buf -
					  rela->offset;
			} else if (rela->type == R_X86_64_64 ||
				   rela->type == R_X86_64_32S)
				add_off = 0;
			else
				continue;

			/*
			 * Attempt to replace references to unbundled sections
			 * with their symbols.
			 */
			list_for_each_entry(sym, &kelf->symbols, list) {
				int start, end;

				if (sym->type == STT_SECTION ||
				    sym->sec != rela->sym->sec)
					continue;

				start = sym->sym.st_value;
				end = sym->sym.st_value + sym->sym.st_size;

				if (!is_text_section(sym->sec) &&
				    rela->type == R_X86_64_32S &&
				    rela->addend == sym->sec->sh.sh_size &&
				    end == sym->sec->sh.sh_size) {
					/*
					 * A special case where gcc needs a
					 * pointer to the address at the end of
					 * a data section.
					 *
					 * This is usually used with a compare
					 * instruction to determine when to end
					 * a loop.  The code doesn't actually
					 * dereference the pointer so this is
					 * "normal" and we just replace the
					 * section reference with a reference
					 * to the last symbol in the section.
					 *
					 * Note that this only catches the
					 * issue when it happens at the end of
					 * a section.  It can also happen in
					 * the middle of a section.  In that
					 * case, the wrong symbol will be
					 * associated with the reference.  But
					 * that's ok because:
					 *
					 * 1) This situation only occurs when
					 *    gcc is trying to get the address
					 *    of the symbol, not the contents
					 *    of its data; and
					 *
					 * 2) Because kpatch doesn't allow data
					 *    sections to change,
					 *    &(var1+sizeof(var1)) will always
					 *    be the same as &var2.
					 */
				} else if (rela->addend + add_off < start ||
					   rela->addend + add_off >= end)
					continue;

				log_debug("%s: replacing %s+%d reference with %s+%d\n",
					  sec->name,
					  rela->sym->name, rela->addend,
					  sym->name, rela->addend - start);

				rela->sym = sym;
				rela->addend -= start;
				break;
			}
		}
	}
	log_debug("\n");
}

void kpatch_dump_kelf(struct kpatch_elf *kelf)
{
	struct section *sec;
	struct symbol *sym;
	struct rela *rela;

	if (loglevel > DEBUG)
		return;

	printf("\n=== Sections ===\n");
	list_for_each_entry(sec, &kelf->sections, list) {
		printf("%02d %s (%s)", sec->index, sec->name, status_str(sec->status));
		if (is_rela_section(sec)) {
			printf(", base-> %s\n", sec->base->name);
			/* skip .debug_* sections */
			if (is_debug_section(sec))
				goto next;
			printf("rela section expansion\n");
			list_for_each_entry(rela, &sec->relas, list) {
				printf("sym %d, offset %d, type %d, %s %s %d\n",
				       rela->sym->index, rela->offset,
				       rela->type, rela->sym->name,
				       (rela->addend < 0) ? "-" : "+",
				       abs(rela->addend));
			}
		} else {
			if (sec->sym)
				printf(", sym-> %s", sec->sym->name);
			if (sec->secsym)
				printf(", secsym-> %s", sec->secsym->name);
			if (sec->rela)
				printf(", rela-> %s", sec->rela->name);
		}
next:
		printf("\n");
	}

	printf("\n=== Symbols ===\n");
	list_for_each_entry(sym, &kelf->symbols, list) {
		printf("sym %02d, type %d, bind %d, ndx %02d, name %s (%s)",
		       sym->index, sym->type, sym->bind, sym->sym.st_shndx,
		       sym->name, status_str(sym->status));
		if (sym->sec && (sym->type == STT_FUNC || sym->type == STT_OBJECT))
			printf(" -> %s", sym->sec->name);
		printf("\n");
	}
}

static void kpatch_check_fentry_calls(struct kpatch_elf *kelf)
{
	struct symbol *sym;
	int errs = 0;

	list_for_each_entry(sym, &kelf->symbols, list) {
		if (sym->type != STT_FUNC || sym->status != CHANGED)
			continue;
		if (!sym->twin->has_fentry_call) {
			log_normal("function %s has no fentry call, unable to patch\n",
				   sym->name);
			errs++;
		}
	}

	if (errs)
		DIFF_FATAL("%d function(s) can not be patched", errs);
}

void kpatch_verify_patchability(struct kpatch_elf *kelf)
{
	struct section *sec;
	int errs = 0;

	list_for_each_entry(sec, &kelf->sections, list) {
		if (sec->status == CHANGED && !sec->include) {
			log_normal("changed section %s not selected for inclusion\n",
				   sec->name);
			errs++;
		}

		if (sec->status != SAME && sec->grouped) {
			log_normal("changed section %s is part of a section group\n",
				   sec->name);
			errs++;
		}

		if (sec->sh.sh_type == SHT_GROUP && sec->status == NEW) {
			log_normal("new/changed group sections are not supported\n");
			errs++;
		}

		/*
		 * ensure we aren't including .data.* or .bss.*
		 * (.data.unlikely is ok b/c it only has __warned vars)
		 */
		if (sec->include && sec->status != NEW &&
		    (!strncmp(sec->name, ".data", 5) ||
		     !strncmp(sec->name, ".bss", 4)) &&
		    strcmp(sec->name, ".data.unlikely")) {
			log_normal("data section %s selected for inclusion\n",
				   sec->name);
			errs++;
		}
	}

	if (errs)
		DIFF_FATAL("%d unsupported section change(s)", errs);
}

#define inc_printf(fmt, ...) \
	log_debug("%*s" fmt, recurselevel, "", ##__VA_ARGS__);

void kpatch_include_symbol(struct symbol *sym, int recurselevel)
{
	struct rela *rela;
	struct section *sec;

	inc_printf("start include_symbol(%s)\n", sym->name);
	sym->include = 1;
	inc_printf("symbol %s is included\n", sym->name);
	/*
	 * Check if sym is a non-local symbol (sym->sec is NULL) or an
	 * unchanged local symbol.  This is a base case for the inclusion
	 * recursion.
	 */
	if (!sym->sec || sym->sec->include ||
	    (sym->type != STT_SECTION && sym->status == SAME))
		goto out;
	sec = sym->sec;
	sec->include = 1;
	inc_printf("section %s is included\n", sec->name);
	if (sec->secsym && sec->secsym != sym) {
		sec->secsym->include = 1;
		inc_printf("section symbol %s is included\n", sec->secsym->name);
	}
	if (!sec->rela)
		goto out;
	sec->rela->include = 1;
	inc_printf("section %s is included\n", sec->rela->name);
	list_for_each_entry(rela, &sec->rela->relas, list)
		kpatch_include_symbol(rela->sym, recurselevel + 1);
out:
	inc_printf("end include_symbol(%s)\n", sym->name);
	return;
}

void kpatch_include_standard_elements(struct kpatch_elf *kelf)
{
	struct section *sec;

	list_for_each_entry(sec, &kelf->sections, list) {
		/* include these sections even if they haven't changed */
		if (!strcmp(sec->name, ".shstrtab") ||
		    !strcmp(sec->name, ".strtab") ||
		    !strcmp(sec->name, ".symtab") ||
		    !strncmp(sec->name, ".rodata.str1.", 13)) {
			sec->include = 1;
			if (sec->secsym)
				sec->secsym->include = 1;
		}
	}

	/* include the NULL symbol */
	list_entry(kelf->symbols.next, struct symbol, list)->include = 1;
}

int kpatch_include_hook_elements(struct kpatch_elf *kelf)
{
	struct section *sec;
	struct symbol *sym;
	struct rela *rela;
	int found = 0;

	/* include load/unload sections */
	list_for_each_entry(sec, &kelf->sections, list) {
		if (!strcmp(sec->name, ".kpatch.hooks.load") ||
		    !strcmp(sec->name, ".kpatch.hooks.unload") ||
		    !strcmp(sec->name, ".rela.kpatch.hooks.load") ||
		    !strcmp(sec->name, ".rela.kpatch.hooks.unload")) {
			sec->include = 1;
			found = 1;
			if (is_rela_section(sec)) {
				/* include hook dependencies */
				rela = list_entry(sec->relas.next,
						  struct rela, list);
				sym = rela->sym;
				log_normal("found hook: %s\n", sym->name);
				kpatch_include_symbol(sym, 0);
				/* strip the hook symbol */
				sym->include = 0;
				sym->sec->sym = NULL;
				/* use section symbol instead */
				rela->sym = sym->sec->secsym;
			} else {
				sec->secsym->include = 1;
			}
		}
	}

	/*
	 * Strip temporary global load/unload function pointer objects
	 * used by the kpatch_[load|unload]() macros.
	 */
	list_for_each_entry(sym, &kelf->symbols, list)
		if (!strcmp(sym->name, "kpatch_load_data") ||
		    !strcmp(sym->name, "kpatch_unload_data"))
			sym->include = 0;

	return found;
}

void kpatch_include_force_elements(struct kpatch_elf *kelf)
{
	struct section *sec;
	struct symbol *sym;
	struct rela *rela;

	/* include force sections */
	list_for_each_entry(sec, &kelf->sections, list) {
		if (!strcmp(sec->name, ".kpatch.force") ||
		    !strcmp(sec->name, ".rela.kpatch.force")) {
			sec->include = 1;
			if (!is_rela_section(sec)) {
				/* .kpatch.force */
				sec->secsym->include = 1;
				continue;
			}
			/* .rela.kpatch.force */
			list_for_each_entry(rela, &sec->relas, list)
				log_normal("function '%s' marked with KPATCH_FORCE_UNSAFE!\n",
					   rela->sym->name);
		}
	}

	/* strip temporary global kpatch_force_func_* symbols */
	list_for_each_entry(sym, &kelf->symbols, list)
		if (!strncmp(sym->name, "__kpatch_force_func_",
			     strlen("__kpatch_force_func_")))
			sym->include = 0;
}

int kpatch_include_new_globals(struct kpatch_elf *kelf)
{
	struct symbol *sym;
	int nr = 0;

	list_for_each_entry(sym, &kelf->symbols, list) {
		if (sym->bind == STB_GLOBAL && sym->sec &&
		    sym->status == NEW) {
			kpatch_include_symbol(sym, 0);
			nr++;
		}
	}

	return nr;
}

int kpatch_include_changed_functions(struct kpatch_elf *kelf)
{
	struct symbol *sym;
	int changed_nr = 0;

	log_debug("\n=== Inclusion Tree ===\n");

	list_for_each_entry(sym, &kelf->symbols, list) {
		if (sym->status == CHANGED &&
		    sym->type == STT_FUNC) {
			changed_nr++;
			kpatch_include_symbol(sym, 0);
		}

		if (sym->type == STT_FILE)
			sym->include = 1;
	}

	return changed_nr;
}

void kpatch_print_changes(struct kpatch_elf *kelf)
{
	struct symbol *sym;

	list_for_each_entry(sym, &kelf->symbols, list) {
		if (!sym->include || !sym->sec || sym->type != STT_FUNC)
			continue;
		if (sym->status == NEW)
			log_normal("new function: %s\n", sym->name);
		else if (sym->status == CHANGED)
			log_normal("changed function: %s\n", sym->name);
	}
}

void kpatch_migrate_symbols(struct list_head *src, struct list_head *dst,
			    int (*select)(struct symbol *))
{
	struct symbol *sym, *safe;

	list_for_each_entry_safe(sym, safe, src, list) {
		if (select && !select(sym))
			continue;

		list_del(&sym->list);
		list_add_tail(&sym->list, dst);
	}
}

int is_null_sym(struct symbol *sym)
{
	return !strlen(sym->name);
}

int is_file_sym(struct symbol *sym)
{
	return sym->type == STT_FILE;
}

int is_local_func_sym(struct symbol *sym)
{
	return sym->bind == STB_LOCAL && sym->type == STT_FUNC;
}

int is_local_sym(struct symbol *sym)
{
	return sym->bind == STB_LOCAL;
}

void kpatch_migrate_included_elements(struct kpatch_elf *kelf, struct kpatch_elf **kelfout)
{
	struct section *sec, *safesec;
	struct symbol *sym, *safesym;
	struct kpatch_elf *out;

	/* allocate output kelf */
	out = malloc(sizeof(*out));
	if (!out)
		ERROR("malloc");
	memset(out, 0, sizeof(*out));
	INIT_LIST_HEAD(&out->sections);
	INIT_LIST_HEAD(&out->symbols);
	INIT_LIST_HEAD(&out->strings);

	/* migrate included sections from kelf to out */
	list_for_each_entry_safe(sec, safesec, &kelf->sections, list) {
		if (!sec->include)
			continue;
		list_del(&sec->list);
		list_add_tail(&sec->list, &out->sections);
		sec->index = 0;
		if (!is_rela_section(sec) && sec->secsym && !sec->secsym->include)
			/* break link to non-included section symbol */
			sec->secsym = NULL;
	}

	/* migrate included symbols from kelf to out */
	list_for_each_entry_safe(sym, safesym, &kelf->symbols, list) {
		if (!sym->include)
			continue;
		list_del(&sym->list);
		list_add_tail(&sym->list, &out->symbols);
		sym->index = 0;
		sym->strip = 0;
		if (sym->sec && !sym->sec->include)
			/* break link to non-included section */
			sym->sec = NULL;
	}

	*kelfout = out;
}

void kpatch_reorder_symbols(struct kpatch_elf *kelf)
{
	LIST_HEAD(symbols);

	/* migrate NULL sym */
	kpatch_migrate_symbols(&kelf->symbols, &symbols, is_null_sym);
	/* migrate LOCAL FILE sym */
	kpatch_migrate_symbols(&kelf->symbols, &symbols, is_file_sym);
	/* migrate LOCAL FUNC syms */
	kpatch_migrate_symbols(&kelf->symbols, &symbols, is_local_func_sym);
	/* migrate all other LOCAL syms */
kpatch_migrate_symbols(&kelf->symbols, &symbols, is_local_sym); /* migrate all other (GLOBAL) syms */ kpatch_migrate_symbols(&kelf->symbols, &symbols, NULL); list_replace(&symbols, &kelf->symbols); } void kpatch_reindex_elements(struct kpatch_elf *kelf) { struct section *sec; struct symbol *sym; int index; index = 1; /* elf write function handles NULL section 0 */ list_for_each_entry(sec, &kelf->sections, list) sec->index = index++; index = 0; list_for_each_entry(sym, &kelf->symbols, list) { sym->index = index++; if (sym->sec) sym->sym.st_shndx = sym->sec->index; else if (sym->sym.st_shndx != SHN_ABS) sym->sym.st_shndx = SHN_UNDEF; } } int bug_table_group_size(struct kpatch_elf *kelf, int offset) { static int size = 0; char *str; if (!size) { str = getenv("BUG_STRUCT_SIZE"); if (!str) ERROR("BUG_STRUCT_SIZE not set"); size = atoi(str); } return size; } int parainstructions_group_size(struct kpatch_elf *kelf, int offset) { static int size = 0; char *str; if (!size) { str = getenv("PARA_STRUCT_SIZE"); if (!str) ERROR("PARA_STRUCT_SIZE not set"); size = atoi(str); } return size; } int ex_table_group_size(struct kpatch_elf *kelf, int offset) { static int size = 0; char *str; if (!size) { str = getenv("EX_STRUCT_SIZE"); if (!str) ERROR("EX_STRUCT_SIZE not set"); size = atoi(str); } return size; } int altinstructions_group_size(struct kpatch_elf *kelf, int offset) { static int size = 0; char *str; if (!size) { str = getenv("ALT_STRUCT_SIZE"); if (!str) ERROR("ALT_STRUCT_SIZE not set"); size = atoi(str); } return size; } int smp_locks_group_size(struct kpatch_elf *kelf, int offset) { return 4; } /* * The rela groups in the .fixup section vary in size. The beginning of each * .fixup rela group is referenced by the __ex_table section. To find the size * of a .fixup rela group, we have to traverse the __ex_table relas. 
*/ int fixup_group_size(struct kpatch_elf *kelf, int offset) { struct section *sec; struct rela *rela; int found; sec = find_section_by_name(&kelf->sections, ".rela__ex_table"); if (!sec) ERROR("missing .rela__ex_table section"); /* find beginning of this group */ found = 0; list_for_each_entry(rela, &sec->relas, list) { if (!strcmp(rela->sym->name, ".fixup") && rela->addend == offset) { found = 1; break; } } if (!found) ERROR("can't find .fixup rela group at offset %d\n", offset); /* find beginning of next group */ found = 0; list_for_each_entry_continue(rela, &sec->relas, list) { if (!strcmp(rela->sym->name, ".fixup") && rela->addend > offset) { found = 1; break; } } if (!found) { /* last group */ struct section *fixupsec; fixupsec = find_section_by_name(&kelf->sections, ".fixup"); return fixupsec->sh.sh_size - offset; } return rela->addend - offset; } struct special_section special_sections[] = { { .name = "__bug_table", .group_size = bug_table_group_size, }, { .name = ".smp_locks", .group_size = smp_locks_group_size, }, { .name = ".parainstructions", .group_size = parainstructions_group_size, }, { .name = ".fixup", .group_size = fixup_group_size, }, { .name = "__ex_table", /* must come after .fixup */ .group_size = ex_table_group_size, }, { .name = ".altinstructions", .group_size = altinstructions_group_size, }, {}, }; int should_keep_rela_group(struct section *sec, int start, int size) { struct rela *rela; int found = 0; /* check if any relas in the group reference any changed functions */ list_for_each_entry(rela, &sec->relas, list) { if (rela->offset >= start && rela->offset < start + size && rela->sym->type == STT_FUNC && rela->sym->sec->include) { found = 1; log_debug("new/changed symbol %s found in special section %s\n", rela->sym->name, sec->name); } } return found; } void kpatch_regenerate_special_section(struct kpatch_elf *kelf, struct special_section *special, struct section *sec) { struct rela *rela, *safe; char *src, *dest; int group_size, 
src_offset, dest_offset, include, align, aligned_size; LIST_HEAD(newrelas); src = sec->base->data->d_buf; /* alloc buffer for new base section */ dest = malloc(sec->base->sh.sh_size); if (!dest) ERROR("malloc"); group_size = 0; src_offset = 0; dest_offset = 0; for ( ; src_offset < sec->base->sh.sh_size; src_offset += group_size) { group_size = special->group_size(kelf, src_offset); include = should_keep_rela_group(sec, src_offset, group_size); if (!include) continue; /* * Copy all relas in the group. It's possible that the relas * aren't sorted (e.g. .rela.fixup), so go through the entire * rela list each time. */ list_for_each_entry_safe(rela, safe, &sec->relas, list) { if (rela->offset >= src_offset && rela->offset < src_offset + group_size) { /* copy rela entry */ list_del(&rela->list); list_add_tail(&rela->list, &newrelas); rela->offset -= src_offset - dest_offset; rela->rela.r_offset = rela->offset; rela->sym->include = 1; } } /* copy base section group */ memcpy(dest + dest_offset, src + src_offset, group_size); dest_offset += group_size; } /* verify that group_size is a divisor of aligned section size */ align = sec->base->sh.sh_addralign; aligned_size = ((sec->base->sh.sh_size + align - 1) / align) * align; if (src_offset != aligned_size) ERROR("group size mismatch for section %s\n", sec->base->name); if (!dest_offset) { /* no changed or global functions referenced */ sec->status = sec->base->status = SAME; sec->include = sec->base->include = 0; free(dest); return; } /* overwrite with new relas list */ list_replace(&newrelas, &sec->relas); /* include both rela and base sections */ sec->include = 1; sec->base->include = 1; /* * Update text section data buf and size. * * The rela section's data buf and size will be regenerated in * kpatch_rebuild_rela_section_data(). 
*/ sec->base->data->d_buf = dest; sec->base->data->d_size = dest_offset; } void kpatch_include_debug_sections(struct kpatch_elf *kelf) { struct section *sec; struct rela *rela, *saferela; /* include all .debug_* sections */ list_for_each_entry(sec, &kelf->sections, list) { if (is_debug_section(sec)) { sec->include = 1; if (!is_rela_section(sec)) sec->secsym->include = 1; } } /* * Go through the .rela.debug_ sections and strip entries * referencing unchanged symbols */ list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec) || !is_debug_section(sec)) continue; list_for_each_entry_safe(rela, saferela, &sec->relas, list) if (!rela->sym->sec->include) list_del(&rela->list); } } void kpatch_mark_ignored_sections(struct kpatch_elf *kelf) { struct section *sec, *strsec, *ignoresec; struct rela *rela; char *name; sec = find_section_by_name(&kelf->sections, ".kpatch.ignore.sections"); if (!sec) return; list_for_each_entry(rela, &sec->rela->relas, list) { strsec = rela->sym->sec; strsec->status = CHANGED; /* * Include the string section here. This is because the * KPATCH_IGNORE_SECTION() macro is passed a literal string * by the patch author, resulting in a change to the string * section. If we don't include it, then we will potentially * get a "changed section not included" error in * kpatch_verify_patchability() if no other function based change * also changes the string section. We could try to exclude each * literal string added to the section by KPATCH_IGNORE_SECTION() * from the section data comparison, but this is a simpler way. 
*/ strsec->include = 1; name = strsec->data->d_buf + rela->addend; ignoresec = find_section_by_name(&kelf->sections, name); if (!ignoresec) ERROR("KPATCH_IGNORE_SECTION: can't find %s", name); log_normal("ignoring section: %s\n", name); if (is_rela_section(ignoresec)) ignoresec = ignoresec->base; ignoresec->ignore = 1; if (ignoresec->twin) ignoresec->twin->ignore = 1; } } void kpatch_mark_ignored_sections_same(struct kpatch_elf *kelf) { struct section *sec; struct symbol *sym; list_for_each_entry(sec, &kelf->sections, list) { if (!sec->ignore) continue; sec->status = SAME; if (sec->secsym) sec->secsym->status = SAME; if (sec->rela) sec->rela->status = SAME; list_for_each_entry(sym, &kelf->symbols, list) { if (sym->sec != sec) continue; sym->status = SAME; } } } void kpatch_mark_ignored_functions_same(struct kpatch_elf *kelf) { struct section *sec; struct rela *rela; sec = find_section_by_name(&kelf->sections, ".kpatch.ignore.functions"); if (!sec) return; list_for_each_entry(rela, &sec->rela->relas, list) { if (!rela->sym->sec) ERROR("expected bundled symbol"); if (rela->sym->type != STT_FUNC) ERROR("expected function symbol"); log_normal("ignoring function: %s\n", rela->sym->name); if (rela->sym->status != CHANGED) log_normal("NOTICE: no change detected in function %s, unnecessary KPATCH_IGNORE_FUNCTION()?\n", rela->sym->name); rela->sym->status = SAME; rela->sym->sec->status = SAME; if (rela->sym->sec->secsym) rela->sym->sec->secsym->status = SAME; if (rela->sym->sec->rela) rela->sym->sec->rela->status = SAME; } } void kpatch_process_special_sections(struct kpatch_elf *kelf) { struct special_section *special; struct section *sec; struct symbol *sym; struct rela *rela; for (special = special_sections; special->name; special++) { sec = find_section_by_name(&kelf->sections, special->name); if (!sec) continue; sec = sec->rela; if (!sec) continue; kpatch_regenerate_special_section(kelf, special, sec); } /* * The following special sections don't have relas which 
reference * non-included symbols, so their entire rela section can be included. */ list_for_each_entry(sec, &kelf->sections, list) { if (strcmp(sec->name, ".altinstr_replacement")) continue; /* include base section */ sec->include = 1; /* include all symbols in the section */ list_for_each_entry(sym, &kelf->symbols, list) if (sym->sec == sec) sym->include = 1; /* include rela section */ if (sec->rela) { sec->rela->include = 1; /* include all symbols referenced by relas */ list_for_each_entry(rela, &sec->rela->relas, list) rela->sym->include = 1; } } /* * The following special sections aren't supported, so make sure we * don't ever try to include them. Otherwise the kernel will see the * jump table during module loading and get confused. Generally it * should be safe to exclude them, it just means that you can't modify * jump labels and enable tracepoints in a patched function. */ list_for_each_entry(sec, &kelf->sections, list) { if (strcmp(sec->name, "__jump_table") && strcmp(sec->name, "__tracepoints") && strcmp(sec->name, "__tracepoints_ptrs") && strcmp(sec->name, "__tracepoints_strings")) continue; sec->status = SAME; sec->include = 0; if (sec->rela) { sec->rela->status = SAME; sec->rela->include = 0; } } } void print_strtab(char *buf, size_t size) { int i; for (i = 0; i < size; i++) { if (buf[i] == 0) printf("\\0"); else printf("%c",buf[i]); } } void kpatch_create_shstrtab(struct kpatch_elf *kelf) { struct section *shstrtab, *sec; size_t size, offset, len; char *buf; shstrtab = find_section_by_name(&kelf->sections, ".shstrtab"); if (!shstrtab) ERROR("find_section_by_name"); /* determine size of string table */ size = 1; /* for initial NULL terminator */ list_for_each_entry(sec, &kelf->sections, list) size += strlen(sec->name) + 1; /* include NULL terminator */ /* allocate data buffer */ buf = malloc(size); if (!buf) ERROR("malloc"); memset(buf, 0, size); /* populate string table and link with section header */ offset = 1; list_for_each_entry(sec, 
&kelf->sections, list) { len = strlen(sec->name) + 1; sec->sh.sh_name = offset; memcpy(buf + offset, sec->name, len); offset += len; } if (offset != size) ERROR("shstrtab size mismatch"); shstrtab->data->d_buf = buf; shstrtab->data->d_size = size; if (loglevel <= DEBUG) { printf("shstrtab: "); print_strtab(buf, size); printf("\n"); list_for_each_entry(sec, &kelf->sections, list) printf("%s @ shstrtab offset %d\n", sec->name, sec->sh.sh_name); } } void kpatch_create_strtab(struct kpatch_elf *kelf) { struct section *strtab; struct symbol *sym; size_t size = 0, offset = 0, len; char *buf; strtab = find_section_by_name(&kelf->sections, ".strtab"); if (!strtab) ERROR("find_section_by_name"); /* determine size of string table */ list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type == STT_SECTION) continue; size += strlen(sym->name) + 1; /* include NULL terminator */ } /* allocate data buffer */ buf = malloc(size); if (!buf) ERROR("malloc"); memset(buf, 0, size); /* populate string table and link with section header */ list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type == STT_SECTION) { sym->sym.st_name = 0; continue; } len = strlen(sym->name) + 1; sym->sym.st_name = offset; memcpy(buf + offset, sym->name, len); offset += len; } if (offset != size) ERROR("strtab size mismatch"); strtab->data->d_buf = buf; strtab->data->d_size = size; if (loglevel <= DEBUG) { printf("strtab: "); print_strtab(buf, size); printf("\n"); list_for_each_entry(sym, &kelf->symbols, list) printf("%s @ strtab offset %d\n", sym->name, sym->sym.st_name); } } void kpatch_create_symtab(struct kpatch_elf *kelf) { struct section *symtab; struct symbol *sym; char *buf; size_t size; int nr = 0, offset = 0, nr_local = 0; symtab = find_section_by_name(&kelf->sections, ".symtab"); if (!symtab) ERROR("find_section_by_name"); /* count symbols */ list_for_each_entry(sym, &kelf->symbols, list) nr++; /* create new symtab buffer */ size = nr * symtab->sh.sh_entsize; buf = malloc(size); if 
(!buf) ERROR("malloc"); memset(buf, 0, size); offset = 0; list_for_each_entry(sym, &kelf->symbols, list) { memcpy(buf + offset, &sym->sym, symtab->sh.sh_entsize); offset += symtab->sh.sh_entsize; if (is_local_sym(sym)) nr_local++; } symtab->data->d_buf = buf; symtab->data->d_size = size; /* update symtab section header */ symtab->sh.sh_link = find_section_by_name(&kelf->sections, ".strtab")->index; symtab->sh.sh_info = nr_local; } struct section *create_section_pair(struct kpatch_elf *kelf, char *name, int entsize, int nr) { char *relaname; struct section *sec, *relasec; int size = entsize * nr; relaname = malloc(strlen(name) + strlen(".rela") + 1); if (!relaname) ERROR("malloc"); strcpy(relaname, ".rela"); strcat(relaname, name); /* allocate text section resources */ ALLOC_LINK(sec, &kelf->sections); sec->name = name; /* set data */ sec->data = malloc(sizeof(*sec->data)); if (!sec->data) ERROR("malloc"); sec->data->d_buf = malloc(size); if (!sec->data->d_buf) ERROR("malloc"); sec->data->d_size = size; sec->data->d_type = ELF_T_BYTE; /* set section header */ sec->sh.sh_type = SHT_PROGBITS; sec->sh.sh_entsize = entsize; sec->sh.sh_addralign = 8; sec->sh.sh_flags = SHF_ALLOC; sec->sh.sh_size = size; /* allocate rela section resources */ ALLOC_LINK(relasec, &kelf->sections); relasec->name = relaname; relasec->base = sec; INIT_LIST_HEAD(&relasec->relas); /* set data, buffers generated by kpatch_rebuild_rela_section_data() */ relasec->data = malloc(sizeof(*relasec->data)); if (!relasec->data) ERROR("malloc"); /* set section header */ relasec->sh.sh_type = SHT_RELA; relasec->sh.sh_entsize = sizeof(GElf_Rela); relasec->sh.sh_addralign = 8; /* set text rela section pointer */ sec->rela = relasec; return sec; } void kpatch_create_patches_sections(struct kpatch_elf *kelf, struct lookup_table *table, char *hint, char *objname) { int nr, index, objname_offset; struct section *sec, *relasec; struct symbol *sym, *strsym; struct rela *rela; struct lookup_result result; struct 
kpatch_patch_func *funcs; /* count patched functions */ nr = 0; list_for_each_entry(sym, &kelf->symbols, list) if (sym->type == STT_FUNC && sym->status == CHANGED) nr++; /* create text/rela section pair */ sec = create_section_pair(kelf, ".kpatch.funcs", sizeof(*funcs), nr); relasec = sec->rela; funcs = sec->data->d_buf; /* lookup strings symbol */ strsym = find_symbol_by_name(&kelf->symbols, ".kpatch.strings"); if (!strsym) ERROR("can't find .kpatch.strings symbol"); /* add objname to strings */ objname_offset = offset_of_string(&kelf->strings, objname); /* populate sections */ index = 0; list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type == STT_FUNC && sym->status == CHANGED) { if (sym->bind == STB_LOCAL) { if (lookup_local_symbol(table, sym->name, hint, &result)) ERROR("lookup_local_symbol %s (%s)", sym->name, hint); } else { if(lookup_global_symbol(table, sym->name, &result)) ERROR("lookup_global_symbol %s", sym->name); } log_debug("lookup for %s @ 0x%016lx len %lu\n", sym->name, result.value, result.size); /* add entry in text section */ funcs[index].old_addr = result.value; funcs[index].old_size = result.size; funcs[index].new_size = sym->sym.st_size; funcs[index].sympos = result.pos; /* * Add a relocation that will populate * the funcs[index].new_addr field at * module load time. */ ALLOC_LINK(rela, &relasec->relas); rela->sym = sym; rela->type = R_X86_64_64; rela->addend = 0; rela->offset = index * sizeof(*funcs); /* * Add a relocation that will populate * the funcs[index].name field. */ ALLOC_LINK(rela, &relasec->relas); rela->sym = strsym; rela->type = R_X86_64_64; rela->addend = offset_of_string(&kelf->strings, sym->name); rela->offset = index * sizeof(*funcs) + offsetof(struct kpatch_patch_func, name); /* * Add a relocation that will populate * the funcs[index].objname field. 
*/ ALLOC_LINK(rela, &relasec->relas); rela->sym = strsym; rela->type = R_X86_64_64; rela->addend = objname_offset; rela->offset = index * sizeof(*funcs) + offsetof(struct kpatch_patch_func,objname); index++; } } /* sanity check, index should equal nr */ if (index != nr) ERROR("size mismatch in funcs sections"); } static int kpatch_is_core_module_symbol(char *name) { return (!strcmp(name, "kpatch_shadow_alloc") || !strcmp(name, "kpatch_shadow_free") || !strcmp(name, "kpatch_shadow_get")); } void kpatch_create_dynamic_rela_sections(struct kpatch_elf *kelf, struct lookup_table *table, char *hint, char *objname) { int nr, index, objname_offset; struct section *sec, *dynsec, *relasec; struct rela *rela, *dynrela, *safe; struct symbol *strsym; struct lookup_result result; struct kpatch_patch_dynrela *dynrelas; int vmlinux, external; vmlinux = !strcmp(objname, "vmlinux"); /* count rela entries that need to be dynamic */ nr = 0; list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec)) continue; if (!strcmp(sec->name, ".rela.kpatch.funcs")) continue; list_for_each_entry(rela, &sec->relas, list) nr++; /* upper bound on number of dynrelas */ } /* create text/rela section pair */ dynsec = create_section_pair(kelf, ".kpatch.dynrelas", sizeof(*dynrelas), nr); relasec = dynsec->rela; dynrelas = dynsec->data->d_buf; /* lookup strings symbol */ strsym = find_symbol_by_name(&kelf->symbols, ".kpatch.strings"); if (!strsym) ERROR("can't find .kpatch.strings symbol"); /* add objname to strings */ objname_offset = offset_of_string(&kelf->strings, objname); /* populate sections */ index = 0; list_for_each_entry(sec, &kelf->sections, list) { if (!is_rela_section(sec)) continue; if (!strcmp(sec->name, ".rela.kpatch.funcs") || !strcmp(sec->name, ".rela.kpatch.dynrelas")) continue; list_for_each_entry_safe(rela, safe, &sec->relas, list) { if (rela->sym->sec) continue; /* * Allow references to core module symbols to remain as * normal relas, since the core module may not 
be * compiled into the kernel, and they should be * exported anyway. */ if (kpatch_is_core_module_symbol(rela->sym->name)) continue; external = 0; if (rela->sym->bind == STB_LOCAL) { /* An unchanged local symbol */ if (lookup_local_symbol(table, rela->sym->name, hint, &result)) ERROR("lookup_local_symbol %s (%s) needed for %s", rela->sym->name, hint, sec->base->name); } else if (vmlinux) { /* * We have a patch to vmlinux which references * a global symbol. Use a normal rela for * exported symbols and a dynrela otherwise. */ if (lookup_is_exported_symbol(table, rela->sym->name)) continue; /* * If lookup_global_symbol() fails, assume the * symbol is defined in another object in the * patch module. */ if (lookup_global_symbol(table, rela->sym->name, &result)) continue; } else { /* * We have a patch to a module which references * a global symbol. */ /* * __fentry__ relas can't be converted to * dynrelas because the ftrace module init code * runs before the dynrela code can initialize * them. __fentry__ is exported by the kernel, * so leave it as a normal rela. */ if (!strcmp(rela->sym->name, "__fentry__")) continue; /* * Try to find the symbol in the module being * patched. */ if (lookup_global_symbol(table, rela->sym->name, &result)) /* * Not there, assume it's either an * exported symbol or provided by * another .o in the patch module. 
*/ external = 1; } log_debug("lookup for %s @ 0x%016lx len %lu\n", rela->sym->name, result.value, result.size); /* dest filled in by rela entry below */ if (vmlinux) dynrelas[index].src = result.value; else /* for modules, src is discovered at runtime */ dynrelas[index].src = 0; dynrelas[index].addend = rela->addend; dynrelas[index].type = rela->type; dynrelas[index].external = external; dynrelas[index].sympos = result.pos; /* add rela to fill in dest field */ ALLOC_LINK(dynrela, &relasec->relas); if (sec->base->sym) dynrela->sym = sec->base->sym; else dynrela->sym = sec->base->secsym; dynrela->type = R_X86_64_64; dynrela->addend = rela->offset; dynrela->offset = index * sizeof(*dynrelas); /* add rela to fill in name field */ ALLOC_LINK(dynrela, &relasec->relas); dynrela->sym = strsym; dynrela->type = R_X86_64_64; dynrela->addend = offset_of_string(&kelf->strings, rela->sym->name); dynrela->offset = index * sizeof(*dynrelas) + offsetof(struct kpatch_patch_dynrela, name); /* add rela to fill in objname field */ ALLOC_LINK(dynrela, &relasec->relas); dynrela->sym = strsym; dynrela->type = R_X86_64_64; dynrela->addend = objname_offset; dynrela->offset = index * sizeof(*dynrelas) + offsetof(struct kpatch_patch_dynrela, objname); rela->sym->strip = 1; list_del(&rela->list); free(rela); index++; } } /* set size to actual number of dynrelas */ dynsec->data->d_size = index * sizeof(struct kpatch_patch_dynrela); dynsec->sh.sh_size = dynsec->data->d_size; } void kpatch_create_hooks_objname_rela(struct kpatch_elf *kelf, char *objname) { struct section *sec; struct rela *rela; struct symbol *strsym; int objname_offset; /* lookup strings symbol */ strsym = find_symbol_by_name(&kelf->symbols, ".kpatch.strings"); if (!strsym) ERROR("can't find .kpatch.strings symbol"); /* add objname to strings */ objname_offset = offset_of_string(&kelf->strings, objname); list_for_each_entry(sec, &kelf->sections, list) { if (strcmp(sec->name, ".rela.kpatch.hooks.load") && strcmp(sec->name, 
".rela.kpatch.hooks.unload")) continue; ALLOC_LINK(rela, &sec->relas); rela->sym = strsym; rela->type = R_X86_64_64; rela->addend = objname_offset; rela->offset = offsetof(struct kpatch_patch_hook, objname); } } /* * This function basically reimplements the functionality of the Linux * recordmcount script, so that patched functions can be recognized by ftrace. * * TODO: Eventually we can modify recordmount so that it recognizes our bundled * sections as valid and does this work for us. */ void kpatch_create_mcount_sections(struct kpatch_elf *kelf) { int nr, index; struct section *sec, *relasec; struct symbol *sym; struct rela *rela; void **funcs, *newdata; unsigned char *insn; nr = 0; list_for_each_entry(sym, &kelf->symbols, list) if (sym->type == STT_FUNC && sym->status != SAME && sym->has_fentry_call) nr++; /* create text/rela section pair */ sec = create_section_pair(kelf, "__mcount_loc", sizeof(*funcs), nr); relasec = sec->rela; funcs = sec->data->d_buf; /* populate sections */ index = 0; list_for_each_entry(sym, &kelf->symbols, list) { if (sym->type != STT_FUNC || sym->status == SAME) continue; if (!sym->has_fentry_call) { log_debug("function %s has no fentry call, no mcount record is needed\n", sym->name); continue; } /* add rela in .rela__mcount_loc to fill in function pointer */ ALLOC_LINK(rela, &relasec->relas); rela->sym = sym; rela->type = R_X86_64_64; rela->addend = 0; rela->offset = index * sizeof(*funcs); /* * Modify the first instruction of the function to "callq * __fentry__" so that ftrace will be happy. 
*/ newdata = malloc(sym->sec->data->d_size); memcpy(newdata, sym->sec->data->d_buf, sym->sec->data->d_size); sym->sec->data->d_buf = newdata; insn = newdata; if (insn[0] != 0xf) ERROR("%s: unexpected instruction at the start of the function", sym->name); insn[0] = 0xe8; insn[1] = 0; insn[2] = 0; insn[3] = 0; insn[4] = 0; rela = list_first_entry(&sym->sec->rela->relas, struct rela, list); rela->type = R_X86_64_PC32; index++; } /* sanity check, index should equal nr */ if (index != nr) ERROR("size mismatch in funcs sections"); } /* * This function strips out symbols that were referenced by changed rela * sections, but the rela entries that referenced them were converted to * dynrelas and are no longer needed. */ void kpatch_strip_unneeded_syms(struct kpatch_elf *kelf, struct lookup_table *table) { struct symbol *sym, *safe; list_for_each_entry_safe(sym, safe, &kelf->symbols, list) { if (sym->strip) { list_del(&sym->list); free(sym); } } } void kpatch_create_strings_elements(struct kpatch_elf *kelf) { struct section *sec; struct symbol *sym; /* create .kpatch.strings */ /* allocate section resources */ ALLOC_LINK(sec, &kelf->sections); sec->name = ".kpatch.strings"; /* set data */ sec->data = malloc(sizeof(*sec->data)); if (!sec->data) ERROR("malloc"); sec->data->d_type = ELF_T_BYTE; /* set section header */ sec->sh.sh_type = SHT_PROGBITS; sec->sh.sh_entsize = 1; sec->sh.sh_addralign = 1; sec->sh.sh_flags = SHF_ALLOC; /* create .kpatch.strings section symbol (reuse sym variable) */ ALLOC_LINK(sym, &kelf->symbols); sym->sec = sec; sym->sym.st_info = GELF_ST_INFO(STB_LOCAL, STT_SECTION); sym->type = STT_SECTION; sym->bind = STB_LOCAL; sym->name = ".kpatch.strings"; } void kpatch_build_strings_section_data(struct kpatch_elf *kelf) { struct string *string; struct section *sec; int size; char *strtab; sec = find_section_by_name(&kelf->sections, ".kpatch.strings"); if (!sec) ERROR("can't find .kpatch.strings"); /* determine size */ size = 0; list_for_each_entry(string, 
&kelf->strings, list) size += strlen(string->name) + 1; /* allocate section resources */ strtab = malloc(size); if (!strtab) ERROR("malloc"); sec->data->d_buf = strtab; sec->data->d_size = size; /* populate strings section data */ list_for_each_entry(string, &kelf->strings, list) { strcpy(strtab, string->name); strtab += strlen(string->name) + 1; } } void kpatch_rebuild_rela_section_data(struct section *sec) { struct rela *rela; int nr = 0, index = 0, size; GElf_Rela *relas; list_for_each_entry(rela, &sec->relas, list) nr++; size = nr * sizeof(*relas); relas = malloc(size); if (!relas) ERROR("malloc"); sec->data->d_buf = relas; sec->data->d_size = size; /* d_type remains ELF_T_RELA */ sec->sh.sh_size = size; list_for_each_entry(rela, &sec->relas, list) { relas[index].r_offset = rela->offset; relas[index].r_addend = rela->addend; relas[index].r_info = GELF_R_INFO(rela->sym->index, rela->type); index++; } /* sanity check, index should equal nr */ if (index != nr) ERROR("size mismatch in rebuilt rela section"); } void kpatch_write_output_elf(struct kpatch_elf *kelf, Elf *elf, char *outfile) { int fd; struct section *sec; Elf *elfout; GElf_Ehdr eh, ehout; Elf_Scn *scn; Elf_Data *data; GElf_Shdr sh; /* TODO make this argv */ fd = creat(outfile, 0777); if (fd == -1) ERROR("creat"); elfout = elf_begin(fd, ELF_C_WRITE, NULL); if (!elfout) ERROR("elf_begin"); if (!gelf_newehdr(elfout, gelf_getclass(kelf->elf))) ERROR("gelf_newehdr"); if (!gelf_getehdr(elfout, &ehout)) ERROR("gelf_getehdr"); if (!gelf_getehdr(elf, &eh)) ERROR("gelf_getehdr"); memset(&ehout, 0, sizeof(ehout)); ehout.e_ident[EI_DATA] = eh.e_ident[EI_DATA]; ehout.e_machine = eh.e_machine; ehout.e_type = eh.e_type; ehout.e_version = EV_CURRENT; ehout.e_shstrndx = find_section_by_name(&kelf->sections, ".shstrtab")->index; /* add changed sections */ list_for_each_entry(sec, &kelf->sections, list) { scn = elf_newscn(elfout); if (!scn) ERROR("elf_newscn"); data = elf_newdata(scn); if (!data) ERROR("elf_newdata"); if 
(!elf_flagdata(data, ELF_C_SET, ELF_F_DIRTY)) ERROR("elf_flagdata"); data->d_type = sec->data->d_type; data->d_buf = sec->data->d_buf; data->d_size = sec->data->d_size; if(!gelf_getshdr(scn, &sh)) ERROR("gelf_getshdr"); sh = sec->sh; if (!gelf_update_shdr(scn, &sh)) ERROR("gelf_update_shdr"); } if (!gelf_update_ehdr(elfout, &ehout)) ERROR("gelf_update_ehdr"); if (elf_update(elfout, ELF_C_WRITE) < 0) { printf("%s\n",elf_errmsg(-1)); ERROR("elf_update"); } } struct arguments { char *args[4]; int debug; }; static char args_doc[] = "original.o patched.o kernel-object output.o"; static struct argp_option options[] = { {"debug", 'd', 0, 0, "Show debug output" }, { 0 } }; static error_t parse_opt (int key, char *arg, struct argp_state *state) { /* Get the input argument from argp_parse, which we know is a pointer to our arguments structure. */ struct arguments *arguments = state->input; switch (key) { case 'd': arguments->debug = 1; break; case ARGP_KEY_ARG: if (state->arg_num >= 4) /* Too many arguments. */ argp_usage (state); arguments->args[state->arg_num] = arg; break; case ARGP_KEY_END: if (state->arg_num < 4) /* Not enough arguments. */ argp_usage (state); break; default: return ARGP_ERR_UNKNOWN; } return 0; } static struct argp argp = { options, parse_opt, args_doc, 0 }; /* * While this is a one-shot program without a lot of proper cleanup in case * of an error, this function serves a debugging purpose: to break down and * zero data structures we shouldn't be accessing anymore. This should * help cause an immediate and obvious issue when a logic error leads to * accessing data that is not intended to be accessed past a particular point. 
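 * (For example, a stale pointer that walks a section or rela list after
 * teardown dereferences memory that was memset to zero before being freed,
 * which crashes immediately instead of silently reading leftover data.)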
*/ void kpatch_elf_teardown(struct kpatch_elf *kelf) { struct section *sec, *safesec; struct symbol *sym, *safesym; struct rela *rela, *saferela; list_for_each_entry_safe(sec, safesec, &kelf->sections, list) { if (is_rela_section(sec)) { list_for_each_entry_safe(rela, saferela, &sec->relas, list) { memset(rela, 0, sizeof(*rela)); free(rela); } memset(sec, 0, sizeof(*sec)); free(sec); } } list_for_each_entry_safe(sym, safesym, &kelf->symbols, list) { memset(sym, 0, sizeof(*sym)); free(sym); } INIT_LIST_HEAD(&kelf->sections); INIT_LIST_HEAD(&kelf->symbols); } void kpatch_elf_free(struct kpatch_elf *kelf) { elf_end(kelf->elf); close(kelf->fd); memset(kelf, 0, sizeof(*kelf)); free(kelf); } int main(int argc, char *argv[]) { struct kpatch_elf *kelf_base, *kelf_patched, *kelf_out; struct arguments arguments; int num_changed, hooks_exist, new_globals_exist; struct lookup_table *lookup; struct section *sec, *symtab; struct symbol *sym; char *hint = NULL, *name, *pos; arguments.debug = 0; argp_parse (&argp, argc, argv, 0, 0, &arguments); if (arguments.debug) loglevel = DEBUG; elf_version(EV_CURRENT); childobj = basename(arguments.args[0]); kelf_base = kpatch_elf_open(arguments.args[0]); kelf_patched = kpatch_elf_open(arguments.args[1]); kpatch_compare_elf_headers(kelf_base->elf, kelf_patched->elf); kpatch_check_program_headers(kelf_base->elf); kpatch_check_program_headers(kelf_patched->elf); kpatch_mark_grouped_sections(kelf_patched); kpatch_replace_sections_syms(kelf_base); kpatch_replace_sections_syms(kelf_patched); kpatch_rename_mangled_functions(kelf_base, kelf_patched); kpatch_correlate_elfs(kelf_base, kelf_patched); kpatch_correlate_static_local_variables(kelf_base, kelf_patched); /* * After this point, we don't care about kelf_base anymore. * We access its sections via the twin pointers in the * section, symbol, and rela lists of kelf_patched. 
*/ kpatch_mark_ignored_sections(kelf_patched); kpatch_compare_correlated_elements(kelf_patched); kpatch_check_fentry_calls(kelf_patched); kpatch_elf_teardown(kelf_base); kpatch_elf_free(kelf_base); kpatch_mark_ignored_functions_same(kelf_patched); kpatch_mark_ignored_sections_same(kelf_patched); kpatch_include_standard_elements(kelf_patched); num_changed = kpatch_include_changed_functions(kelf_patched); kpatch_include_debug_sections(kelf_patched); hooks_exist = kpatch_include_hook_elements(kelf_patched); kpatch_include_force_elements(kelf_patched); new_globals_exist = kpatch_include_new_globals(kelf_patched); kpatch_print_changes(kelf_patched); kpatch_dump_kelf(kelf_patched); kpatch_process_special_sections(kelf_patched); kpatch_verify_patchability(kelf_patched); if (!num_changed && !new_globals_exist) { if (hooks_exist) log_debug("no changed functions were found, but hooks exist\n"); else { log_debug("no changed functions were found\n"); return 3; /* 1 is ERROR, 2 is DIFF_FATAL */ } } /* this is destructive to kelf_patched */ kpatch_migrate_included_elements(kelf_patched, &kelf_out); /* * Teardown kelf_patched since we shouldn't access sections or symbols * through it anymore. Don't free however, since our section and symbol * name fields still point to strings in the Elf object owned by * kpatch_patched. */ kpatch_elf_teardown(kelf_patched); list_for_each_entry(sym, &kelf_out->symbols, list) { if (sym->type == STT_FILE) { hint = sym->name; break; } } if (!hint) ERROR("FILE symbol not found in output. 
Stripped?\n"); /* create symbol lookup table */ lookup = lookup_open(arguments.args[2]); /* extract module name (destructive to arguments.modulefile) */ name = basename(arguments.args[2]); if (!strncmp(name, "vmlinux-", 8)) name = "vmlinux"; else { pos = strchr(name,'.'); if (pos) { /* kernel module */ *pos = '\0'; pos = name; while ((pos = strchr(pos, '-'))) *pos++ = '_'; } } /* create strings, patches, and dynrelas sections */ kpatch_create_strings_elements(kelf_out); kpatch_create_patches_sections(kelf_out, lookup, hint, name); kpatch_create_dynamic_rela_sections(kelf_out, lookup, hint, name); kpatch_create_hooks_objname_rela(kelf_out, name); kpatch_build_strings_section_data(kelf_out); kpatch_create_mcount_sections(kelf_out); /* * At this point, the set of output sections and symbols is * finalized. Reorder the symbols into linker-compliant * order and index all the symbols and sections. After the * indexes have been established, update index data * throughout the structure. */ kpatch_reorder_symbols(kelf_out); kpatch_strip_unneeded_syms(kelf_out, lookup); kpatch_reindex_elements(kelf_out); /* * Update rela section headers and rebuild the rela section data * buffers from the relas lists. 
*/ symtab = find_section_by_name(&kelf_out->sections, ".symtab"); list_for_each_entry(sec, &kelf_out->sections, list) { if (!is_rela_section(sec)) continue; sec->sh.sh_link = symtab->index; sec->sh.sh_info = sec->base->index; kpatch_rebuild_rela_section_data(sec); } kpatch_create_shstrtab(kelf_out); kpatch_create_strtab(kelf_out); kpatch_create_symtab(kelf_out); kpatch_dump_kelf(kelf_out); kpatch_write_output_elf(kelf_out, kelf_patched->elf, arguments.args[3]); kpatch_elf_free(kelf_patched); kpatch_elf_teardown(kelf_out); kpatch_elf_free(kelf_out); return 0; } kpatch-0.3.2/kpatch-build/insn/000077500000000000000000000000001266116401600163075ustar00rootroot00000000000000kpatch-0.3.2/kpatch-build/insn/asm/000077500000000000000000000000001266116401600170675ustar00rootroot00000000000000kpatch-0.3.2/kpatch-build/insn/asm/inat.h000066400000000000000000000137641266116401600202060ustar00rootroot00000000000000#ifndef _ASM_X86_INAT_H #define _ASM_X86_INAT_H /* * x86 instruction attributes * * Written by Masami Hiramatsu * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. * */ #include <asm/inat_types.h> /* * Internal bits. Don't use bitmasks directly, because these bits are * unstable. You should use checking functions. 
*/ #define INAT_OPCODE_TABLE_SIZE 256 #define INAT_GROUP_TABLE_SIZE 8 /* Legacy last prefixes */ #define INAT_PFX_OPNDSZ 1 /* 0x66 */ /* LPFX1 */ #define INAT_PFX_REPE 2 /* 0xF3 */ /* LPFX2 */ #define INAT_PFX_REPNE 3 /* 0xF2 */ /* LPFX3 */ /* Other Legacy prefixes */ #define INAT_PFX_LOCK 4 /* 0xF0 */ #define INAT_PFX_CS 5 /* 0x2E */ #define INAT_PFX_DS 6 /* 0x3E */ #define INAT_PFX_ES 7 /* 0x26 */ #define INAT_PFX_FS 8 /* 0x64 */ #define INAT_PFX_GS 9 /* 0x65 */ #define INAT_PFX_SS 10 /* 0x36 */ #define INAT_PFX_ADDRSZ 11 /* 0x67 */ /* x86-64 REX prefix */ #define INAT_PFX_REX 12 /* 0x4X */ /* AVX VEX prefixes */ #define INAT_PFX_VEX2 13 /* 2-bytes VEX prefix */ #define INAT_PFX_VEX3 14 /* 3-bytes VEX prefix */ #define INAT_LSTPFX_MAX 3 #define INAT_LGCPFX_MAX 11 /* Immediate size */ #define INAT_IMM_BYTE 1 #define INAT_IMM_WORD 2 #define INAT_IMM_DWORD 3 #define INAT_IMM_QWORD 4 #define INAT_IMM_PTR 5 #define INAT_IMM_VWORD32 6 #define INAT_IMM_VWORD 7 /* Legacy prefix */ #define INAT_PFX_OFFS 0 #define INAT_PFX_BITS 4 #define INAT_PFX_MAX ((1 << INAT_PFX_BITS) - 1) #define INAT_PFX_MASK (INAT_PFX_MAX << INAT_PFX_OFFS) /* Escape opcodes */ #define INAT_ESC_OFFS (INAT_PFX_OFFS + INAT_PFX_BITS) #define INAT_ESC_BITS 2 #define INAT_ESC_MAX ((1 << INAT_ESC_BITS) - 1) #define INAT_ESC_MASK (INAT_ESC_MAX << INAT_ESC_OFFS) /* Group opcodes (1-16) */ #define INAT_GRP_OFFS (INAT_ESC_OFFS + INAT_ESC_BITS) #define INAT_GRP_BITS 5 #define INAT_GRP_MAX ((1 << INAT_GRP_BITS) - 1) #define INAT_GRP_MASK (INAT_GRP_MAX << INAT_GRP_OFFS) /* Immediates */ #define INAT_IMM_OFFS (INAT_GRP_OFFS + INAT_GRP_BITS) #define INAT_IMM_BITS 3 #define INAT_IMM_MASK (((1 << INAT_IMM_BITS) - 1) << INAT_IMM_OFFS) /* Flags */ #define INAT_FLAG_OFFS (INAT_IMM_OFFS + INAT_IMM_BITS) #define INAT_MODRM (1 << (INAT_FLAG_OFFS)) #define INAT_FORCE64 (1 << (INAT_FLAG_OFFS + 1)) #define INAT_SCNDIMM (1 << (INAT_FLAG_OFFS + 2)) #define INAT_MOFFSET (1 << (INAT_FLAG_OFFS + 3)) #define INAT_VARIANT (1 << 
(INAT_FLAG_OFFS + 4))
#define INAT_VEXOK (1 << (INAT_FLAG_OFFS + 5))
#define INAT_VEXONLY (1 << (INAT_FLAG_OFFS + 6))

/* Attribute making macros for attribute tables */
#define INAT_MAKE_PREFIX(pfx) (pfx << INAT_PFX_OFFS)
#define INAT_MAKE_ESCAPE(esc) (esc << INAT_ESC_OFFS)
#define INAT_MAKE_GROUP(grp) ((grp << INAT_GRP_OFFS) | INAT_MODRM)
#define INAT_MAKE_IMM(imm) (imm << INAT_IMM_OFFS)

/* Attribute search APIs */
extern insn_attr_t inat_get_opcode_attribute(insn_byte_t opcode);
extern int inat_get_last_prefix_id(insn_byte_t last_pfx);
extern insn_attr_t inat_get_escape_attribute(insn_byte_t opcode,
                                             int lpfx_id,
                                             insn_attr_t esc_attr);
extern insn_attr_t inat_get_group_attribute(insn_byte_t modrm,
                                            int lpfx_id,
                                            insn_attr_t esc_attr);
extern insn_attr_t inat_get_avx_attribute(insn_byte_t opcode,
                                          insn_byte_t vex_m,
                                          insn_byte_t vex_pp);

/* Attribute checking functions */
static inline int inat_is_legacy_prefix(insn_attr_t attr)
{
        attr &= INAT_PFX_MASK;
        return attr && attr <= INAT_LGCPFX_MAX;
}

static inline int inat_is_address_size_prefix(insn_attr_t attr)
{
        return (attr & INAT_PFX_MASK) == INAT_PFX_ADDRSZ;
}

static inline int inat_is_operand_size_prefix(insn_attr_t attr)
{
        return (attr & INAT_PFX_MASK) == INAT_PFX_OPNDSZ;
}

static inline int inat_is_rex_prefix(insn_attr_t attr)
{
        return (attr & INAT_PFX_MASK) == INAT_PFX_REX;
}

static inline int inat_last_prefix_id(insn_attr_t attr)
{
        if ((attr & INAT_PFX_MASK) > INAT_LSTPFX_MAX)
                return 0;
        else
                return attr & INAT_PFX_MASK;
}

static inline int inat_is_vex_prefix(insn_attr_t attr)
{
        attr &= INAT_PFX_MASK;
        return attr == INAT_PFX_VEX2 || attr == INAT_PFX_VEX3;
}

static inline int inat_is_vex3_prefix(insn_attr_t attr)
{
        return (attr & INAT_PFX_MASK) == INAT_PFX_VEX3;
}

static inline int inat_is_escape(insn_attr_t attr)
{
        return attr & INAT_ESC_MASK;
}

static inline int inat_escape_id(insn_attr_t attr)
{
        return (attr & INAT_ESC_MASK) >> INAT_ESC_OFFS;
}

static inline int inat_is_group(insn_attr_t attr)
{
        return attr & INAT_GRP_MASK;
}

static inline int inat_group_id(insn_attr_t attr)
{
        return (attr & INAT_GRP_MASK) >> INAT_GRP_OFFS;
}

static inline int inat_group_common_attribute(insn_attr_t attr)
{
        return attr & ~INAT_GRP_MASK;
}

static inline int inat_has_immediate(insn_attr_t attr)
{
        return attr & INAT_IMM_MASK;
}

static inline int inat_immediate_size(insn_attr_t attr)
{
        return (attr & INAT_IMM_MASK) >> INAT_IMM_OFFS;
}

static inline int inat_has_modrm(insn_attr_t attr)
{
        return attr & INAT_MODRM;
}

static inline int inat_is_force64(insn_attr_t attr)
{
        return attr & INAT_FORCE64;
}

static inline int inat_has_second_immediate(insn_attr_t attr)
{
        return attr & INAT_SCNDIMM;
}

static inline int inat_has_moffset(insn_attr_t attr)
{
        return attr & INAT_MOFFSET;
}

static inline int inat_has_variant(insn_attr_t attr)
{
        return attr & INAT_VARIANT;
}

static inline int inat_accept_vex(insn_attr_t attr)
{
        return attr & INAT_VEXOK;
}

static inline int inat_must_vex(insn_attr_t attr)
{
        return attr & INAT_VEXONLY;
}
#endif

/* ===== kpatch-0.3.2/kpatch-build/insn/asm/inat_types.h ===== */

#ifndef _ASM_X86_INAT_TYPES_H
#define _ASM_X86_INAT_TYPES_H
/*
 * x86 instruction attributes
 *
 * Written by Masami Hiramatsu
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 */

/* Instruction attributes */
typedef unsigned int insn_attr_t;
typedef unsigned char insn_byte_t;
typedef signed int insn_value_t;

#endif

/* ===== kpatch-0.3.2/kpatch-build/insn/asm/insn.h ===== */

#ifndef _ASM_X86_INSN_H
#define _ASM_X86_INSN_H
/*
 * x86 instruction analysis
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 * Copyright (C) IBM Corporation, 2009
 */

/* insn_attr_t is defined in inat.h */
#include <asm/inat.h>

struct insn_field {
        union {
                insn_value_t value;
                insn_byte_t bytes[4];
        };
        /* !0 if we've run insn_get_xxx() for this field */
        unsigned char got;
        unsigned char nbytes;
};

struct insn {
        struct insn_field prefixes;     /*
                                         * Prefixes
                                         * prefixes.bytes[3]: last prefix
                                         */
        struct insn_field rex_prefix;   /* REX prefix */
        struct insn_field vex_prefix;   /* VEX prefix */
        struct insn_field opcode;       /*
                                         * opcode.bytes[0]: opcode1
                                         * opcode.bytes[1]: opcode2
                                         * opcode.bytes[2]: opcode3
                                         */
        struct insn_field modrm;
        struct insn_field sib;
        struct insn_field displacement;
        union {
                struct insn_field immediate;
                struct insn_field moffset1;     /* for 64bit MOV */
                struct insn_field immediate1;   /* for 64bit imm or off16/32 */
        };
        union {
                struct insn_field moffset2;     /* for 64bit MOV */
                struct insn_field immediate2;   /* for 64bit imm or seg16 */
        };

        insn_attr_t attr;
        unsigned char opnd_bytes;
        unsigned char addr_bytes;
        unsigned char length;
        unsigned char x86_64;

        const insn_byte_t *kaddr;       /* kernel address of insn to analyze */
        const insn_byte_t *next_byte;
};

#define MAX_INSN_SIZE 16

#define X86_MODRM_MOD(modrm) (((modrm) & 0xc0) >> 6)
#define X86_MODRM_REG(modrm) (((modrm) & 0x38) >> 3)
#define X86_MODRM_RM(modrm) ((modrm) & 0x07)

#define X86_SIB_SCALE(sib) (((sib) & 0xc0) >> 6)
#define X86_SIB_INDEX(sib) (((sib) & 0x38) >> 3)
#define X86_SIB_BASE(sib) ((sib) & 0x07)

#define X86_REX_W(rex) ((rex) & 8)
#define X86_REX_R(rex) ((rex) & 4)
#define X86_REX_X(rex) ((rex) & 2)
#define X86_REX_B(rex) ((rex) & 1)

/* VEX bit flags */
#define X86_VEX_W(vex) ((vex) & 0x80)   /* VEX3 Byte2 */
#define X86_VEX_R(vex) ((vex) & 0x80)   /* VEX2/3 Byte1 */
#define X86_VEX_X(vex) ((vex) & 0x40)   /* VEX3 Byte1 */
#define X86_VEX_B(vex) ((vex) & 0x20)   /* VEX3 Byte1 */
#define X86_VEX_L(vex) ((vex) & 0x04)   /* VEX3 Byte2, VEX2 Byte1 */
/* VEX bit fields */
#define X86_VEX3_M(vex) ((vex) & 0x1f)          /* VEX3 Byte1 */
#define X86_VEX2_M 1                            /* VEX2.M always 1 */
#define X86_VEX_V(vex) (((vex) & 0x78) >> 3)    /* VEX3 Byte2, VEX2 Byte1 */
#define X86_VEX_P(vex) ((vex) & 0x03)           /* VEX3 Byte2, VEX2 Byte1 */
#define X86_VEX_M_MAX 0x1f                      /* VEX3.M Maximum value */

extern void insn_init(struct insn *insn, const void *kaddr, int x86_64);
extern void insn_get_prefixes(struct insn *insn);
extern void insn_get_opcode(struct insn *insn);
extern void insn_get_modrm(struct insn *insn);
extern void insn_get_sib(struct insn *insn);
extern void insn_get_displacement(struct insn *insn);
extern void insn_get_immediate(struct insn *insn);
extern void insn_get_length(struct insn *insn);

/* Attribute will be determined after getting ModRM (for opcode groups) */
static inline void insn_get_attribute(struct insn *insn)
{
        insn_get_modrm(insn);
}

/* Instruction uses RIP-relative addressing */
extern int insn_rip_relative(struct insn *insn);

/* Init insn for kernel text */
static inline void kernel_insn_init(struct insn *insn, const void *kaddr)
{
#ifdef CONFIG_X86_64
        insn_init(insn, kaddr, 1);
#else /* CONFIG_X86_32 */
        insn_init(insn, kaddr, 0);
#endif
}

static inline int insn_is_avx(struct insn *insn)
{
        if (!insn->prefixes.got)
                insn_get_prefixes(insn);
        return (insn->vex_prefix.value != 0);
}

/* Ensure this instruction is decoded completely */
static inline int insn_complete(struct insn *insn)
{
        return insn->opcode.got && insn->modrm.got && insn->sib.got &&
                insn->displacement.got && insn->immediate.got;
}

static inline insn_byte_t insn_vex_m_bits(struct insn *insn)
{
        if (insn->vex_prefix.nbytes == 2)       /* 2 bytes VEX */
                return X86_VEX2_M;
        else
                return X86_VEX3_M(insn->vex_prefix.bytes[1]);
}

static inline insn_byte_t insn_vex_p_bits(struct insn *insn)
{
        if (insn->vex_prefix.nbytes == 2)       /* 2 bytes VEX */
                return X86_VEX_P(insn->vex_prefix.bytes[1]);
        else
                return X86_VEX_P(insn->vex_prefix.bytes[2]);
}

/* Get the last prefix id from last prefix or VEX prefix */
static inline int insn_last_prefix_id(struct insn *insn)
{
        if (insn_is_avx(insn))
                return insn_vex_p_bits(insn);   /* VEX_p is a SIMD prefix id */

        if (insn->prefixes.bytes[3])
                return inat_get_last_prefix_id(insn->prefixes.bytes[3]);

        return 0;
}

/* Offset of each field from kaddr */
static inline int insn_offset_rex_prefix(struct insn *insn)
{
        return insn->prefixes.nbytes;
}
static inline int insn_offset_vex_prefix(struct insn *insn)
{
        return insn_offset_rex_prefix(insn) + insn->rex_prefix.nbytes;
}
static inline int insn_offset_opcode(struct insn *insn)
{
        return insn_offset_vex_prefix(insn) + insn->vex_prefix.nbytes;
}
static inline int insn_offset_modrm(struct insn *insn)
{
        return insn_offset_opcode(insn) + insn->opcode.nbytes;
}
static inline int insn_offset_sib(struct insn *insn)
{
        return insn_offset_modrm(insn) + insn->modrm.nbytes;
}
static inline int insn_offset_displacement(struct insn *insn)
{
        return insn_offset_sib(insn) + insn->sib.nbytes;
}
static inline int insn_offset_immediate(struct insn *insn)
{
        return insn_offset_displacement(insn) + insn->displacement.nbytes;
}

#endif /* _ASM_X86_INSN_H */

/* ===== kpatch-0.3.2/kpatch-build/insn/inat-tables.c ===== */

/* x86 opcode map generated from x86-opcode-map.txt */
/* Do not change this code.
*/ /* Table: one byte opcode */ const insn_attr_t inat_primary_table[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_MODRM, [0x01] = INAT_MODRM, [0x02] = INAT_MODRM, [0x03] = INAT_MODRM, [0x04] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x05] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x08] = INAT_MODRM, [0x09] = INAT_MODRM, [0x0a] = INAT_MODRM, [0x0b] = INAT_MODRM, [0x0c] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x0d] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x0f] = INAT_MAKE_ESCAPE(1), [0x10] = INAT_MODRM, [0x11] = INAT_MODRM, [0x12] = INAT_MODRM, [0x13] = INAT_MODRM, [0x14] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x15] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x18] = INAT_MODRM, [0x19] = INAT_MODRM, [0x1a] = INAT_MODRM, [0x1b] = INAT_MODRM, [0x1c] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x1d] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x20] = INAT_MODRM, [0x21] = INAT_MODRM, [0x22] = INAT_MODRM, [0x23] = INAT_MODRM, [0x24] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x25] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x26] = INAT_MAKE_PREFIX(INAT_PFX_ES), [0x28] = INAT_MODRM, [0x29] = INAT_MODRM, [0x2a] = INAT_MODRM, [0x2b] = INAT_MODRM, [0x2c] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x2d] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x2e] = INAT_MAKE_PREFIX(INAT_PFX_CS), [0x30] = INAT_MODRM, [0x31] = INAT_MODRM, [0x32] = INAT_MODRM, [0x33] = INAT_MODRM, [0x34] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x35] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x36] = INAT_MAKE_PREFIX(INAT_PFX_SS), [0x38] = INAT_MODRM, [0x39] = INAT_MODRM, [0x3a] = INAT_MODRM, [0x3b] = INAT_MODRM, [0x3c] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x3d] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0x3e] = INAT_MAKE_PREFIX(INAT_PFX_DS), [0x40] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x41] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x42] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x43] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x44] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x45] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x46] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x47] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x48] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x49] = 
INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4a] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4b] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4c] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4d] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4e] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x4f] = INAT_MAKE_PREFIX(INAT_PFX_REX), [0x50] = INAT_FORCE64, [0x51] = INAT_FORCE64, [0x52] = INAT_FORCE64, [0x53] = INAT_FORCE64, [0x54] = INAT_FORCE64, [0x55] = INAT_FORCE64, [0x56] = INAT_FORCE64, [0x57] = INAT_FORCE64, [0x58] = INAT_FORCE64, [0x59] = INAT_FORCE64, [0x5a] = INAT_FORCE64, [0x5b] = INAT_FORCE64, [0x5c] = INAT_FORCE64, [0x5d] = INAT_FORCE64, [0x5e] = INAT_FORCE64, [0x5f] = INAT_FORCE64, [0x62] = INAT_MODRM, [0x63] = INAT_MODRM | INAT_MODRM, [0x64] = INAT_MAKE_PREFIX(INAT_PFX_FS), [0x65] = INAT_MAKE_PREFIX(INAT_PFX_GS), [0x66] = INAT_MAKE_PREFIX(INAT_PFX_OPNDSZ), [0x67] = INAT_MAKE_PREFIX(INAT_PFX_ADDRSZ), [0x68] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x69] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM, [0x6a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0x6b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x71] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x72] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x73] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x74] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x75] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x76] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x77] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x78] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x79] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7a] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7b] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7c] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7d] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7e] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x7f] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0x80] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(1), [0x81] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM | INAT_MAKE_GROUP(1), [0x82] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(1), [0x83] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | 
INAT_MAKE_GROUP(1), [0x84] = INAT_MODRM, [0x85] = INAT_MODRM, [0x86] = INAT_MODRM, [0x87] = INAT_MODRM, [0x88] = INAT_MODRM, [0x89] = INAT_MODRM, [0x8a] = INAT_MODRM, [0x8b] = INAT_MODRM, [0x8c] = INAT_MODRM, [0x8d] = INAT_MODRM, [0x8e] = INAT_MODRM, [0x8f] = INAT_MAKE_GROUP(2) | INAT_MODRM | INAT_FORCE64, [0x9a] = INAT_MAKE_IMM(INAT_IMM_PTR), [0x9c] = INAT_FORCE64, [0x9d] = INAT_FORCE64, [0xa0] = INAT_MOFFSET, [0xa1] = INAT_MOFFSET, [0xa2] = INAT_MOFFSET, [0xa3] = INAT_MOFFSET, [0xa8] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xa9] = INAT_MAKE_IMM(INAT_IMM_VWORD32), [0xb0] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb1] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb2] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb3] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb4] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb5] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb6] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb7] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xb8] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xb9] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xba] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xbb] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xbc] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xbd] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xbe] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xbf] = INAT_MAKE_IMM(INAT_IMM_VWORD), [0xc0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(3), [0xc1] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(3), [0xc2] = INAT_MAKE_IMM(INAT_IMM_WORD) | INAT_FORCE64, [0xc4] = INAT_MODRM | INAT_MAKE_PREFIX(INAT_PFX_VEX3), [0xc5] = INAT_MODRM | INAT_MAKE_PREFIX(INAT_PFX_VEX2), [0xc6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(4), [0xc7] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM | INAT_MAKE_GROUP(5), [0xc8] = INAT_MAKE_IMM(INAT_IMM_WORD) | INAT_SCNDIMM, [0xc9] = INAT_FORCE64, [0xca] = INAT_MAKE_IMM(INAT_IMM_WORD), [0xcd] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xd0] = INAT_MODRM | INAT_MAKE_GROUP(3), [0xd1] = INAT_MODRM | INAT_MAKE_GROUP(3), [0xd2] = INAT_MODRM | INAT_MAKE_GROUP(3), [0xd3] = INAT_MODRM | INAT_MAKE_GROUP(3), [0xd4] = 
INAT_MAKE_IMM(INAT_IMM_BYTE), [0xd5] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xd8] = INAT_MODRM, [0xd9] = INAT_MODRM, [0xda] = INAT_MODRM, [0xdb] = INAT_MODRM, [0xdc] = INAT_MODRM, [0xdd] = INAT_MODRM, [0xde] = INAT_MODRM, [0xdf] = INAT_MODRM, [0xe0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0xe1] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0xe2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0xe3] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0xe4] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xe5] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xe6] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xe7] = INAT_MAKE_IMM(INAT_IMM_BYTE), [0xe8] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0xe9] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0xea] = INAT_MAKE_IMM(INAT_IMM_PTR), [0xeb] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_FORCE64, [0xf0] = INAT_MAKE_PREFIX(INAT_PFX_LOCK), [0xf2] = INAT_MAKE_PREFIX(INAT_PFX_REPNE) | INAT_MAKE_PREFIX(INAT_PFX_REPNE), [0xf3] = INAT_MAKE_PREFIX(INAT_PFX_REPE) | INAT_MAKE_PREFIX(INAT_PFX_REPE), [0xf6] = INAT_MODRM | INAT_MAKE_GROUP(6), [0xf7] = INAT_MODRM | INAT_MAKE_GROUP(7), [0xfe] = INAT_MAKE_GROUP(8), [0xff] = INAT_MAKE_GROUP(9), }; /* Table: 2-byte opcode (0x0f) */ const insn_attr_t inat_escape_table_1[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_MAKE_GROUP(10), [0x01] = INAT_MAKE_GROUP(11), [0x02] = INAT_MODRM, [0x03] = INAT_MODRM, [0x0d] = INAT_MAKE_GROUP(12), [0x0f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0x10] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x11] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x12] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x13] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x14] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x15] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x16] = INAT_MODRM | INAT_VEXOK | INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x17] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x18] = INAT_MAKE_GROUP(13), [0x1a] = INAT_MODRM | INAT_MODRM | INAT_MODRM | INAT_MODRM, [0x1b] = INAT_MODRM | 
INAT_MODRM | INAT_MODRM | INAT_MODRM, [0x1f] = INAT_MODRM, [0x20] = INAT_MODRM, [0x21] = INAT_MODRM, [0x22] = INAT_MODRM, [0x23] = INAT_MODRM, [0x28] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x29] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x2a] = INAT_MODRM | INAT_VARIANT, [0x2b] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x2c] = INAT_MODRM | INAT_VARIANT, [0x2d] = INAT_MODRM | INAT_VARIANT, [0x2e] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x2f] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x38] = INAT_MAKE_ESCAPE(2), [0x3a] = INAT_MAKE_ESCAPE(3), [0x40] = INAT_MODRM, [0x41] = INAT_MODRM, [0x42] = INAT_MODRM, [0x43] = INAT_MODRM, [0x44] = INAT_MODRM, [0x45] = INAT_MODRM, [0x46] = INAT_MODRM, [0x47] = INAT_MODRM, [0x48] = INAT_MODRM, [0x49] = INAT_MODRM, [0x4a] = INAT_MODRM, [0x4b] = INAT_MODRM, [0x4c] = INAT_MODRM, [0x4d] = INAT_MODRM, [0x4e] = INAT_MODRM, [0x4f] = INAT_MODRM, [0x50] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x51] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x52] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x53] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x54] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x55] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x56] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x57] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x58] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x59] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5a] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5b] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5c] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5d] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5e] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x5f] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x60] = INAT_MODRM | INAT_VARIANT, [0x61] = INAT_MODRM | INAT_VARIANT, [0x62] = INAT_MODRM | INAT_VARIANT, [0x63] = INAT_MODRM | INAT_VARIANT, [0x64] = INAT_MODRM | INAT_VARIANT, [0x65] = INAT_MODRM | INAT_VARIANT, [0x66] = INAT_MODRM | INAT_VARIANT, [0x67] = INAT_MODRM | INAT_VARIANT, [0x68] = INAT_MODRM | INAT_VARIANT, 
[0x69] = INAT_MODRM | INAT_VARIANT, [0x6a] = INAT_MODRM | INAT_VARIANT, [0x6b] = INAT_MODRM | INAT_VARIANT, [0x6c] = INAT_VARIANT, [0x6d] = INAT_VARIANT, [0x6e] = INAT_MODRM | INAT_VARIANT, [0x6f] = INAT_MODRM | INAT_VARIANT, [0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x71] = INAT_MAKE_GROUP(14), [0x72] = INAT_MAKE_GROUP(15), [0x73] = INAT_MAKE_GROUP(16), [0x74] = INAT_MODRM | INAT_VARIANT, [0x75] = INAT_MODRM | INAT_VARIANT, [0x76] = INAT_MODRM | INAT_VARIANT, [0x77] = INAT_VEXOK | INAT_VEXOK, [0x78] = INAT_MODRM, [0x79] = INAT_MODRM, [0x7c] = INAT_VARIANT, [0x7d] = INAT_VARIANT, [0x7e] = INAT_MODRM | INAT_VARIANT, [0x7f] = INAT_MODRM | INAT_VARIANT, [0x80] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x81] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x82] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x83] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x84] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x85] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x86] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x87] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x88] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x89] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8a] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8b] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8c] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8d] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8e] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x8f] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_FORCE64, [0x90] = INAT_MODRM, [0x91] = INAT_MODRM, [0x92] = INAT_MODRM, [0x93] = INAT_MODRM, [0x94] = INAT_MODRM, [0x95] = INAT_MODRM, [0x96] = INAT_MODRM, [0x97] = INAT_MODRM, [0x98] = INAT_MODRM, [0x99] = INAT_MODRM, [0x9a] = INAT_MODRM, [0x9b] = INAT_MODRM, [0x9c] = INAT_MODRM, [0x9d] = INAT_MODRM, [0x9e] = INAT_MODRM, [0x9f] = INAT_MODRM, [0xa0] = INAT_FORCE64, [0xa1] = INAT_FORCE64, [0xa3] = INAT_MODRM, 
[0xa4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0xa5] = INAT_MODRM, [0xa6] = INAT_MAKE_GROUP(17), [0xa7] = INAT_MAKE_GROUP(18), [0xa8] = INAT_FORCE64, [0xa9] = INAT_FORCE64, [0xab] = INAT_MODRM, [0xac] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0xad] = INAT_MODRM, [0xae] = INAT_MAKE_GROUP(19), [0xaf] = INAT_MODRM, [0xb0] = INAT_MODRM, [0xb1] = INAT_MODRM, [0xb2] = INAT_MODRM, [0xb3] = INAT_MODRM, [0xb4] = INAT_MODRM, [0xb5] = INAT_MODRM, [0xb6] = INAT_MODRM, [0xb7] = INAT_MODRM, [0xb8] = INAT_VARIANT, [0xb9] = INAT_MAKE_GROUP(20), [0xba] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_MAKE_GROUP(21), [0xbb] = INAT_MODRM, [0xbc] = INAT_MODRM | INAT_VARIANT, [0xbd] = INAT_MODRM | INAT_VARIANT, [0xbe] = INAT_MODRM, [0xbf] = INAT_MODRM, [0xc0] = INAT_MODRM, [0xc1] = INAT_MODRM, [0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0xc3] = INAT_MODRM, [0xc4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0xc5] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0xc6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0xc7] = INAT_MAKE_GROUP(22), [0xd0] = INAT_VARIANT, [0xd1] = INAT_MODRM | INAT_VARIANT, [0xd2] = INAT_MODRM | INAT_VARIANT, [0xd3] = INAT_MODRM | INAT_VARIANT, [0xd4] = INAT_MODRM | INAT_VARIANT, [0xd5] = INAT_MODRM | INAT_VARIANT, [0xd6] = INAT_VARIANT, [0xd7] = INAT_MODRM | INAT_VARIANT, [0xd8] = INAT_MODRM | INAT_VARIANT, [0xd9] = INAT_MODRM | INAT_VARIANT, [0xda] = INAT_MODRM | INAT_VARIANT, [0xdb] = INAT_MODRM | INAT_VARIANT, [0xdc] = INAT_MODRM | INAT_VARIANT, [0xdd] = INAT_MODRM | INAT_VARIANT, [0xde] = INAT_MODRM | INAT_VARIANT, [0xdf] = INAT_MODRM | INAT_VARIANT, [0xe0] = INAT_MODRM | INAT_VARIANT, [0xe1] = INAT_MODRM | INAT_VARIANT, [0xe2] = INAT_MODRM | INAT_VARIANT, [0xe3] = INAT_MODRM | INAT_VARIANT, [0xe4] = INAT_MODRM | INAT_VARIANT, [0xe5] = INAT_MODRM | INAT_VARIANT, [0xe6] = INAT_VARIANT, [0xe7] = INAT_MODRM | INAT_VARIANT, [0xe8] = INAT_MODRM | 
INAT_VARIANT, [0xe9] = INAT_MODRM | INAT_VARIANT, [0xea] = INAT_MODRM | INAT_VARIANT, [0xeb] = INAT_MODRM | INAT_VARIANT, [0xec] = INAT_MODRM | INAT_VARIANT, [0xed] = INAT_MODRM | INAT_VARIANT, [0xee] = INAT_MODRM | INAT_VARIANT, [0xef] = INAT_MODRM | INAT_VARIANT, [0xf0] = INAT_VARIANT, [0xf1] = INAT_MODRM | INAT_VARIANT, [0xf2] = INAT_MODRM | INAT_VARIANT, [0xf3] = INAT_MODRM | INAT_VARIANT, [0xf4] = INAT_MODRM | INAT_VARIANT, [0xf5] = INAT_MODRM | INAT_VARIANT, [0xf6] = INAT_MODRM | INAT_VARIANT, [0xf7] = INAT_MODRM | INAT_VARIANT, [0xf8] = INAT_MODRM | INAT_VARIANT, [0xf9] = INAT_MODRM | INAT_VARIANT, [0xfa] = INAT_MODRM | INAT_VARIANT, [0xfb] = INAT_MODRM | INAT_VARIANT, [0xfc] = INAT_MODRM | INAT_VARIANT, [0xfd] = INAT_MODRM | INAT_VARIANT, [0xfe] = INAT_MODRM | INAT_VARIANT, }; const insn_attr_t inat_escape_table_1_1[INAT_OPCODE_TABLE_SIZE] = { [0x10] = INAT_MODRM | INAT_VEXOK, [0x11] = INAT_MODRM | INAT_VEXOK, [0x12] = INAT_MODRM | INAT_VEXOK, [0x13] = INAT_MODRM | INAT_VEXOK, [0x14] = INAT_MODRM | INAT_VEXOK, [0x15] = INAT_MODRM | INAT_VEXOK, [0x16] = INAT_MODRM | INAT_VEXOK, [0x17] = INAT_MODRM | INAT_VEXOK, [0x28] = INAT_MODRM | INAT_VEXOK, [0x29] = INAT_MODRM | INAT_VEXOK, [0x2a] = INAT_MODRM, [0x2b] = INAT_MODRM | INAT_VEXOK, [0x2c] = INAT_MODRM, [0x2d] = INAT_MODRM, [0x2e] = INAT_MODRM | INAT_VEXOK, [0x2f] = INAT_MODRM | INAT_VEXOK, [0x50] = INAT_MODRM | INAT_VEXOK, [0x51] = INAT_MODRM | INAT_VEXOK, [0x54] = INAT_MODRM | INAT_VEXOK, [0x55] = INAT_MODRM | INAT_VEXOK, [0x56] = INAT_MODRM | INAT_VEXOK, [0x57] = INAT_MODRM | INAT_VEXOK, [0x58] = INAT_MODRM | INAT_VEXOK, [0x59] = INAT_MODRM | INAT_VEXOK, [0x5a] = INAT_MODRM | INAT_VEXOK, [0x5b] = INAT_MODRM | INAT_VEXOK, [0x5c] = INAT_MODRM | INAT_VEXOK, [0x5d] = INAT_MODRM | INAT_VEXOK, [0x5e] = INAT_MODRM | INAT_VEXOK, [0x5f] = INAT_MODRM | INAT_VEXOK, [0x60] = INAT_MODRM | INAT_VEXOK, [0x61] = INAT_MODRM | INAT_VEXOK, [0x62] = INAT_MODRM | INAT_VEXOK, [0x63] = INAT_MODRM | INAT_VEXOK, [0x64] = 
INAT_MODRM | INAT_VEXOK, [0x65] = INAT_MODRM | INAT_VEXOK, [0x66] = INAT_MODRM | INAT_VEXOK, [0x67] = INAT_MODRM | INAT_VEXOK, [0x68] = INAT_MODRM | INAT_VEXOK, [0x69] = INAT_MODRM | INAT_VEXOK, [0x6a] = INAT_MODRM | INAT_VEXOK, [0x6b] = INAT_MODRM | INAT_VEXOK, [0x6c] = INAT_MODRM | INAT_VEXOK, [0x6d] = INAT_MODRM | INAT_VEXOK, [0x6e] = INAT_MODRM | INAT_VEXOK, [0x6f] = INAT_MODRM | INAT_VEXOK, [0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x74] = INAT_MODRM | INAT_VEXOK, [0x75] = INAT_MODRM | INAT_VEXOK, [0x76] = INAT_MODRM | INAT_VEXOK, [0x7c] = INAT_MODRM | INAT_VEXOK, [0x7d] = INAT_MODRM | INAT_VEXOK, [0x7e] = INAT_MODRM | INAT_VEXOK, [0x7f] = INAT_MODRM | INAT_VEXOK, [0xbc] = INAT_MODRM, [0xbd] = INAT_MODRM, [0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xc4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xc5] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xc6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xd0] = INAT_MODRM | INAT_VEXOK, [0xd1] = INAT_MODRM | INAT_VEXOK, [0xd2] = INAT_MODRM | INAT_VEXOK, [0xd3] = INAT_MODRM | INAT_VEXOK, [0xd4] = INAT_MODRM | INAT_VEXOK, [0xd5] = INAT_MODRM | INAT_VEXOK, [0xd6] = INAT_MODRM | INAT_VEXOK, [0xd7] = INAT_MODRM | INAT_VEXOK, [0xd8] = INAT_MODRM | INAT_VEXOK, [0xd9] = INAT_MODRM | INAT_VEXOK, [0xda] = INAT_MODRM | INAT_VEXOK, [0xdb] = INAT_MODRM | INAT_VEXOK, [0xdc] = INAT_MODRM | INAT_VEXOK, [0xdd] = INAT_MODRM | INAT_VEXOK, [0xde] = INAT_MODRM | INAT_VEXOK, [0xdf] = INAT_MODRM | INAT_VEXOK, [0xe0] = INAT_MODRM | INAT_VEXOK, [0xe1] = INAT_MODRM | INAT_VEXOK, [0xe2] = INAT_MODRM | INAT_VEXOK, [0xe3] = INAT_MODRM | INAT_VEXOK, [0xe4] = INAT_MODRM | INAT_VEXOK, [0xe5] = INAT_MODRM | INAT_VEXOK, [0xe6] = INAT_MODRM | INAT_VEXOK, [0xe7] = INAT_MODRM | INAT_VEXOK, [0xe8] = INAT_MODRM | INAT_VEXOK, [0xe9] = INAT_MODRM | INAT_VEXOK, [0xea] = INAT_MODRM | INAT_VEXOK, [0xeb] = INAT_MODRM | INAT_VEXOK, [0xec] = INAT_MODRM | INAT_VEXOK, [0xed] = 
INAT_MODRM | INAT_VEXOK, [0xee] = INAT_MODRM | INAT_VEXOK, [0xef] = INAT_MODRM | INAT_VEXOK, [0xf1] = INAT_MODRM | INAT_VEXOK, [0xf2] = INAT_MODRM | INAT_VEXOK, [0xf3] = INAT_MODRM | INAT_VEXOK, [0xf4] = INAT_MODRM | INAT_VEXOK, [0xf5] = INAT_MODRM | INAT_VEXOK, [0xf6] = INAT_MODRM | INAT_VEXOK, [0xf7] = INAT_MODRM | INAT_VEXOK, [0xf8] = INAT_MODRM | INAT_VEXOK, [0xf9] = INAT_MODRM | INAT_VEXOK, [0xfa] = INAT_MODRM | INAT_VEXOK, [0xfb] = INAT_MODRM | INAT_VEXOK, [0xfc] = INAT_MODRM | INAT_VEXOK, [0xfd] = INAT_MODRM | INAT_VEXOK, [0xfe] = INAT_MODRM | INAT_VEXOK, }; const insn_attr_t inat_escape_table_1_2[INAT_OPCODE_TABLE_SIZE] = { [0x10] = INAT_MODRM | INAT_VEXOK, [0x11] = INAT_MODRM | INAT_VEXOK, [0x12] = INAT_MODRM | INAT_VEXOK, [0x16] = INAT_MODRM | INAT_VEXOK, [0x2a] = INAT_MODRM | INAT_VEXOK, [0x2c] = INAT_MODRM | INAT_VEXOK, [0x2d] = INAT_MODRM | INAT_VEXOK, [0x51] = INAT_MODRM | INAT_VEXOK, [0x52] = INAT_MODRM | INAT_VEXOK, [0x53] = INAT_MODRM | INAT_VEXOK, [0x58] = INAT_MODRM | INAT_VEXOK, [0x59] = INAT_MODRM | INAT_VEXOK, [0x5a] = INAT_MODRM | INAT_VEXOK, [0x5b] = INAT_MODRM | INAT_VEXOK, [0x5c] = INAT_MODRM | INAT_VEXOK, [0x5d] = INAT_MODRM | INAT_VEXOK, [0x5e] = INAT_MODRM | INAT_VEXOK, [0x5f] = INAT_MODRM | INAT_VEXOK, [0x6f] = INAT_MODRM | INAT_VEXOK, [0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x7e] = INAT_MODRM | INAT_VEXOK, [0x7f] = INAT_MODRM | INAT_VEXOK, [0xb8] = INAT_MODRM, [0xbc] = INAT_MODRM, [0xbd] = INAT_MODRM, [0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xd6] = INAT_MODRM, [0xe6] = INAT_MODRM | INAT_VEXOK, }; const insn_attr_t inat_escape_table_1_3[INAT_OPCODE_TABLE_SIZE] = { [0x10] = INAT_MODRM | INAT_VEXOK, [0x11] = INAT_MODRM | INAT_VEXOK, [0x12] = INAT_MODRM | INAT_VEXOK, [0x2a] = INAT_MODRM | INAT_VEXOK, [0x2c] = INAT_MODRM | INAT_VEXOK, [0x2d] = INAT_MODRM | INAT_VEXOK, [0x51] = INAT_MODRM | INAT_VEXOK, [0x58] = INAT_MODRM | INAT_VEXOK, [0x59] = INAT_MODRM | INAT_VEXOK, [0x5a] = INAT_MODRM 
| INAT_VEXOK, [0x5c] = INAT_MODRM | INAT_VEXOK, [0x5d] = INAT_MODRM | INAT_VEXOK, [0x5e] = INAT_MODRM | INAT_VEXOK, [0x5f] = INAT_MODRM | INAT_VEXOK, [0x70] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x7c] = INAT_MODRM | INAT_VEXOK, [0x7d] = INAT_MODRM | INAT_VEXOK, [0xbc] = INAT_MODRM, [0xbd] = INAT_MODRM, [0xc2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xd0] = INAT_MODRM | INAT_VEXOK, [0xd6] = INAT_MODRM, [0xe6] = INAT_MODRM | INAT_VEXOK, [0xf0] = INAT_MODRM | INAT_VEXOK, }; /* Table: 3-byte opcode 1 (0x0f 0x38) */ const insn_attr_t inat_escape_table_2[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_MODRM | INAT_VARIANT, [0x01] = INAT_MODRM | INAT_VARIANT, [0x02] = INAT_MODRM | INAT_VARIANT, [0x03] = INAT_MODRM | INAT_VARIANT, [0x04] = INAT_MODRM | INAT_VARIANT, [0x05] = INAT_MODRM | INAT_VARIANT, [0x06] = INAT_MODRM | INAT_VARIANT, [0x07] = INAT_MODRM | INAT_VARIANT, [0x08] = INAT_MODRM | INAT_VARIANT, [0x09] = INAT_MODRM | INAT_VARIANT, [0x0a] = INAT_MODRM | INAT_VARIANT, [0x0b] = INAT_MODRM | INAT_VARIANT, [0x0c] = INAT_VARIANT, [0x0d] = INAT_VARIANT, [0x0e] = INAT_VARIANT, [0x0f] = INAT_VARIANT, [0x10] = INAT_VARIANT, [0x13] = INAT_VARIANT, [0x14] = INAT_VARIANT, [0x15] = INAT_VARIANT, [0x16] = INAT_VARIANT, [0x17] = INAT_VARIANT, [0x18] = INAT_VARIANT, [0x19] = INAT_VARIANT, [0x1a] = INAT_VARIANT, [0x1c] = INAT_MODRM | INAT_VARIANT, [0x1d] = INAT_MODRM | INAT_VARIANT, [0x1e] = INAT_MODRM | INAT_VARIANT, [0x20] = INAT_VARIANT, [0x21] = INAT_VARIANT, [0x22] = INAT_VARIANT, [0x23] = INAT_VARIANT, [0x24] = INAT_VARIANT, [0x25] = INAT_VARIANT, [0x28] = INAT_VARIANT, [0x29] = INAT_VARIANT, [0x2a] = INAT_VARIANT, [0x2b] = INAT_VARIANT, [0x2c] = INAT_VARIANT, [0x2d] = INAT_VARIANT, [0x2e] = INAT_VARIANT, [0x2f] = INAT_VARIANT, [0x30] = INAT_VARIANT, [0x31] = INAT_VARIANT, [0x32] = INAT_VARIANT, [0x33] = INAT_VARIANT, [0x34] = INAT_VARIANT, [0x35] = INAT_VARIANT, [0x36] = INAT_VARIANT, [0x37] = INAT_VARIANT, [0x38] = INAT_VARIANT, [0x39] = 
INAT_VARIANT, [0x3a] = INAT_VARIANT, [0x3b] = INAT_VARIANT, [0x3c] = INAT_VARIANT, [0x3d] = INAT_VARIANT, [0x3e] = INAT_VARIANT, [0x3f] = INAT_VARIANT, [0x40] = INAT_VARIANT, [0x41] = INAT_VARIANT, [0x45] = INAT_VARIANT, [0x46] = INAT_VARIANT, [0x47] = INAT_VARIANT, [0x58] = INAT_VARIANT, [0x59] = INAT_VARIANT, [0x5a] = INAT_VARIANT, [0x78] = INAT_VARIANT, [0x79] = INAT_VARIANT, [0x80] = INAT_VARIANT, [0x81] = INAT_VARIANT, [0x82] = INAT_VARIANT, [0x8c] = INAT_VARIANT, [0x8e] = INAT_VARIANT, [0x90] = INAT_VARIANT, [0x91] = INAT_VARIANT, [0x92] = INAT_VARIANT, [0x93] = INAT_VARIANT, [0x96] = INAT_VARIANT, [0x97] = INAT_VARIANT, [0x98] = INAT_VARIANT, [0x99] = INAT_VARIANT, [0x9a] = INAT_VARIANT, [0x9b] = INAT_VARIANT, [0x9c] = INAT_VARIANT, [0x9d] = INAT_VARIANT, [0x9e] = INAT_VARIANT, [0x9f] = INAT_VARIANT, [0xa6] = INAT_VARIANT, [0xa7] = INAT_VARIANT, [0xa8] = INAT_VARIANT, [0xa9] = INAT_VARIANT, [0xaa] = INAT_VARIANT, [0xab] = INAT_VARIANT, [0xac] = INAT_VARIANT, [0xad] = INAT_VARIANT, [0xae] = INAT_VARIANT, [0xaf] = INAT_VARIANT, [0xb6] = INAT_VARIANT, [0xb7] = INAT_VARIANT, [0xb8] = INAT_VARIANT, [0xb9] = INAT_VARIANT, [0xba] = INAT_VARIANT, [0xbb] = INAT_VARIANT, [0xbc] = INAT_VARIANT, [0xbd] = INAT_VARIANT, [0xbe] = INAT_VARIANT, [0xbf] = INAT_VARIANT, [0xdb] = INAT_VARIANT, [0xdc] = INAT_VARIANT, [0xdd] = INAT_VARIANT, [0xde] = INAT_VARIANT, [0xdf] = INAT_VARIANT, [0xf0] = INAT_MODRM | INAT_MODRM | INAT_VARIANT, [0xf1] = INAT_MODRM | INAT_MODRM | INAT_VARIANT, [0xf2] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xf3] = INAT_MAKE_GROUP(23), [0xf5] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_VARIANT, [0xf6] = INAT_VARIANT, [0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY | INAT_VARIANT, }; const insn_attr_t inat_escape_table_2_1[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_MODRM | INAT_VEXOK, [0x01] = INAT_MODRM | INAT_VEXOK, [0x02] = INAT_MODRM | INAT_VEXOK, [0x03] = INAT_MODRM | INAT_VEXOK, [0x04] = INAT_MODRM | INAT_VEXOK, [0x05] = INAT_MODRM | INAT_VEXOK, 
[0x06] = INAT_MODRM | INAT_VEXOK, [0x07] = INAT_MODRM | INAT_VEXOK, [0x08] = INAT_MODRM | INAT_VEXOK, [0x09] = INAT_MODRM | INAT_VEXOK, [0x0a] = INAT_MODRM | INAT_VEXOK, [0x0b] = INAT_MODRM | INAT_VEXOK, [0x0c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x0d] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x0e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x0f] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x10] = INAT_MODRM, [0x13] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x14] = INAT_MODRM, [0x15] = INAT_MODRM, [0x16] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x17] = INAT_MODRM | INAT_VEXOK, [0x18] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x19] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x1a] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x1c] = INAT_MODRM | INAT_VEXOK, [0x1d] = INAT_MODRM | INAT_VEXOK, [0x1e] = INAT_MODRM | INAT_VEXOK, [0x20] = INAT_MODRM | INAT_VEXOK, [0x21] = INAT_MODRM | INAT_VEXOK, [0x22] = INAT_MODRM | INAT_VEXOK, [0x23] = INAT_MODRM | INAT_VEXOK, [0x24] = INAT_MODRM | INAT_VEXOK, [0x25] = INAT_MODRM | INAT_VEXOK, [0x28] = INAT_MODRM | INAT_VEXOK, [0x29] = INAT_MODRM | INAT_VEXOK, [0x2a] = INAT_MODRM | INAT_VEXOK, [0x2b] = INAT_MODRM | INAT_VEXOK, [0x2c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x2d] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x2e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x2f] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x30] = INAT_MODRM | INAT_VEXOK, [0x31] = INAT_MODRM | INAT_VEXOK, [0x32] = INAT_MODRM | INAT_VEXOK, [0x33] = INAT_MODRM | INAT_VEXOK, [0x34] = INAT_MODRM | INAT_VEXOK, [0x35] = INAT_MODRM | INAT_VEXOK, [0x36] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x37] = INAT_MODRM | INAT_VEXOK, [0x38] = INAT_MODRM | INAT_VEXOK, [0x39] = INAT_MODRM | INAT_VEXOK, [0x3a] = INAT_MODRM | INAT_VEXOK, [0x3b] = INAT_MODRM | INAT_VEXOK, [0x3c] = INAT_MODRM | INAT_VEXOK, [0x3d] = INAT_MODRM | INAT_VEXOK, [0x3e] = INAT_MODRM | INAT_VEXOK, [0x3f] = INAT_MODRM | INAT_VEXOK, [0x40] = INAT_MODRM | 
INAT_VEXOK, [0x41] = INAT_MODRM | INAT_VEXOK, [0x45] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x46] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x47] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x58] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x59] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x5a] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x78] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x79] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x80] = INAT_MODRM, [0x81] = INAT_MODRM, [0x82] = INAT_MODRM, [0x8c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x8e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x90] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x91] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x92] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x93] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x96] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x97] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x98] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x99] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9a] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9b] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9c] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9d] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9e] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x9f] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xa6] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xa7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xa8] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xa9] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xaa] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xab] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xac] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xad] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xae] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xaf] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xb6] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xb7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xb8] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xb9] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xba] = INAT_MODRM | 
INAT_VEXOK | INAT_VEXONLY, [0xbb] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xbc] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xbd] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xbe] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xbf] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xdb] = INAT_MODRM | INAT_VEXOK, [0xdc] = INAT_MODRM | INAT_VEXOK, [0xdd] = INAT_MODRM | INAT_VEXOK, [0xde] = INAT_MODRM | INAT_VEXOK, [0xdf] = INAT_MODRM | INAT_VEXOK, [0xf0] = INAT_MODRM, [0xf1] = INAT_MODRM, [0xf6] = INAT_MODRM, [0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, }; const insn_attr_t inat_escape_table_2_2[INAT_OPCODE_TABLE_SIZE] = { [0xf5] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xf6] = INAT_MODRM, [0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, }; const insn_attr_t inat_escape_table_2_3[INAT_OPCODE_TABLE_SIZE] = { [0xf0] = INAT_MODRM | INAT_MODRM, [0xf1] = INAT_MODRM | INAT_MODRM, [0xf5] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xf6] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0xf7] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, }; /* Table: 3-byte opcode 2 (0x0f 0x3a) */ const insn_attr_t inat_escape_table_3[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_VARIANT, [0x01] = INAT_VARIANT, [0x02] = INAT_VARIANT, [0x04] = INAT_VARIANT, [0x05] = INAT_VARIANT, [0x06] = INAT_VARIANT, [0x08] = INAT_VARIANT, [0x09] = INAT_VARIANT, [0x0a] = INAT_VARIANT, [0x0b] = INAT_VARIANT, [0x0c] = INAT_VARIANT, [0x0d] = INAT_VARIANT, [0x0e] = INAT_VARIANT, [0x0f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x14] = INAT_VARIANT, [0x15] = INAT_VARIANT, [0x16] = INAT_VARIANT, [0x17] = INAT_VARIANT, [0x18] = INAT_VARIANT, [0x19] = INAT_VARIANT, [0x1d] = INAT_VARIANT, [0x20] = INAT_VARIANT, [0x21] = INAT_VARIANT, [0x22] = INAT_VARIANT, [0x38] = INAT_VARIANT, [0x39] = INAT_VARIANT, [0x40] = INAT_VARIANT, [0x41] = INAT_VARIANT, [0x42] = INAT_VARIANT, [0x44] = INAT_VARIANT, [0x46] = INAT_VARIANT, [0x4a] = INAT_VARIANT, [0x4b] = INAT_VARIANT, [0x4c] = INAT_VARIANT, [0x60] = INAT_VARIANT, [0x61] = 
INAT_VARIANT, [0x62] = INAT_VARIANT, [0x63] = INAT_VARIANT, [0xdf] = INAT_VARIANT, [0xf0] = INAT_VARIANT, }; const insn_attr_t inat_escape_table_3_1[INAT_OPCODE_TABLE_SIZE] = { [0x00] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x01] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x02] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x04] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x05] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x06] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x08] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x09] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0c] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0d] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0e] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x0f] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x14] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x15] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x16] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x17] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x18] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x19] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x1d] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x20] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x21] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x22] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x38] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x39] = INAT_MAKE_IMM(INAT_IMM_BYTE) | 
INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x40] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x41] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x42] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x44] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x46] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x4a] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x4b] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x4c] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x60] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x61] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x62] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x63] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0xdf] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, }; const insn_attr_t inat_escape_table_3_3[INAT_OPCODE_TABLE_SIZE] = { [0xf0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, }; /* GrpTable: Grp1 */ /* GrpTable: Grp1A */ /* GrpTable: Grp2 */ /* GrpTable: Grp3_1 */ const insn_attr_t inat_group_table_6[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, [0x4] = INAT_MODRM, [0x5] = INAT_MODRM, [0x6] = INAT_MODRM, [0x7] = INAT_MODRM, }; /* GrpTable: Grp3_2 */ const insn_attr_t inat_group_table_7[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, [0x4] = INAT_MODRM, [0x5] = INAT_MODRM, [0x6] = INAT_MODRM, [0x7] = INAT_MODRM, }; /* GrpTable: Grp4 */ const insn_attr_t inat_group_table_8[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, }; /* GrpTable: Grp5 */ const insn_attr_t inat_group_table_9[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, [0x2] = INAT_MODRM | INAT_FORCE64, [0x3] = INAT_MODRM, [0x4] = INAT_MODRM 
| INAT_FORCE64, [0x5] = INAT_MODRM, [0x6] = INAT_MODRM | INAT_FORCE64, }; /* GrpTable: Grp6 */ const insn_attr_t inat_group_table_10[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, [0x4] = INAT_MODRM, [0x5] = INAT_MODRM, }; /* GrpTable: Grp7 */ const insn_attr_t inat_group_table_11[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, [0x4] = INAT_MODRM, [0x6] = INAT_MODRM, [0x7] = INAT_MODRM, }; /* GrpTable: Grp8 */ /* GrpTable: Grp9 */ const insn_attr_t inat_group_table_22[INAT_GROUP_TABLE_SIZE] = { [0x1] = INAT_MODRM, [0x6] = INAT_MODRM | INAT_MODRM | INAT_VARIANT, [0x7] = INAT_MODRM | INAT_MODRM | INAT_VARIANT, }; const insn_attr_t inat_group_table_22_1[INAT_GROUP_TABLE_SIZE] = { [0x6] = INAT_MODRM, }; const insn_attr_t inat_group_table_22_2[INAT_GROUP_TABLE_SIZE] = { [0x6] = INAT_MODRM, [0x7] = INAT_MODRM, }; /* GrpTable: Grp10 */ /* GrpTable: Grp11A */ const insn_attr_t inat_group_table_4[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM, [0x7] = INAT_MAKE_IMM(INAT_IMM_BYTE), }; /* GrpTable: Grp11B */ const insn_attr_t inat_group_table_5[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MAKE_IMM(INAT_IMM_VWORD32) | INAT_MODRM, [0x7] = INAT_MAKE_IMM(INAT_IMM_VWORD32), }; /* GrpTable: Grp12 */ const insn_attr_t inat_group_table_14[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, }; const insn_attr_t inat_group_table_14_1[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, }; /* GrpTable: Grp13 */ const insn_attr_t inat_group_table_15[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | 
INAT_MODRM | INAT_VARIANT, [0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, }; const insn_attr_t inat_group_table_15_1[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x4] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, }; /* GrpTable: Grp14 */ const insn_attr_t inat_group_table_16[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x3] = INAT_VARIANT, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VARIANT, [0x7] = INAT_VARIANT, }; const insn_attr_t inat_group_table_16_1[INAT_GROUP_TABLE_SIZE] = { [0x2] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x3] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x6] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, [0x7] = INAT_MAKE_IMM(INAT_IMM_BYTE) | INAT_MODRM | INAT_VEXOK, }; /* GrpTable: Grp15 */ const insn_attr_t inat_group_table_19[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_VARIANT, [0x1] = INAT_VARIANT, [0x2] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, [0x3] = INAT_MODRM | INAT_VEXOK | INAT_VARIANT, }; const insn_attr_t inat_group_table_19_2[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, }; /* GrpTable: Grp16 */ const insn_attr_t inat_group_table_13[INAT_GROUP_TABLE_SIZE] = { [0x0] = INAT_MODRM, [0x1] = INAT_MODRM, [0x2] = INAT_MODRM, [0x3] = INAT_MODRM, }; /* GrpTable: Grp17 */ const insn_attr_t inat_group_table_23[INAT_GROUP_TABLE_SIZE] = { [0x1] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x2] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, [0x3] = INAT_MODRM | INAT_VEXOK | INAT_VEXONLY, }; /* GrpTable: GrpP */ /* GrpTable: GrpPDLK */ /* GrpTable: GrpRNG */ /* Escape opcode map array */ const insn_attr_t * const inat_escape_tables[INAT_ESC_MAX + 1][INAT_LSTPFX_MAX + 1] = { [1][0] = 
inat_escape_table_1,
	[1][1] = inat_escape_table_1_1,
	[1][2] = inat_escape_table_1_2,
	[1][3] = inat_escape_table_1_3,
	[2][0] = inat_escape_table_2,
	[2][1] = inat_escape_table_2_1,
	[2][2] = inat_escape_table_2_2,
	[2][3] = inat_escape_table_2_3,
	[3][0] = inat_escape_table_3,
	[3][1] = inat_escape_table_3_1,
	[3][3] = inat_escape_table_3_3,
};

/* Group opcode map array */
const insn_attr_t * const inat_group_tables[INAT_GRP_MAX + 1][INAT_LSTPFX_MAX + 1] = {
	[4][0] = inat_group_table_4,
	[5][0] = inat_group_table_5,
	[6][0] = inat_group_table_6,
	[7][0] = inat_group_table_7,
	[8][0] = inat_group_table_8,
	[9][0] = inat_group_table_9,
	[10][0] = inat_group_table_10,
	[11][0] = inat_group_table_11,
	[13][0] = inat_group_table_13,
	[14][0] = inat_group_table_14,
	[14][1] = inat_group_table_14_1,
	[15][0] = inat_group_table_15,
	[15][1] = inat_group_table_15_1,
	[16][0] = inat_group_table_16,
	[16][1] = inat_group_table_16_1,
	[19][0] = inat_group_table_19,
	[19][2] = inat_group_table_19_2,
	[22][0] = inat_group_table_22,
	[22][1] = inat_group_table_22_1,
	[22][2] = inat_group_table_22_2,
	[23][0] = inat_group_table_23,
};

/* AVX opcode map array */
const insn_attr_t * const inat_avx_tables[X86_VEX_M_MAX + 1][INAT_LSTPFX_MAX + 1] = {
	[1][0] = inat_escape_table_1,
	[1][1] = inat_escape_table_1_1,
	[1][2] = inat_escape_table_1_2,
	[1][3] = inat_escape_table_1_3,
	[2][0] = inat_escape_table_2,
	[2][1] = inat_escape_table_2_1,
	[2][2] = inat_escape_table_2_2,
	[2][3] = inat_escape_table_2_3,
	[3][0] = inat_escape_table_3,
	[3][1] = inat_escape_table_3_1,
	[3][3] = inat_escape_table_3_3,
};

kpatch-0.3.2/kpatch-build/insn/inat.c:

/*
 * x86 instruction attribute tables
 *
 * Written by Masami Hiramatsu
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later
 * version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 */

#include <asm/insn.h>

/* Attribute tables are generated from opcode map */
#include "inat-tables.c"

/* Attribute search APIs */
insn_attr_t inat_get_opcode_attribute(insn_byte_t opcode)
{
	return inat_primary_table[opcode];
}

int inat_get_last_prefix_id(insn_byte_t last_pfx)
{
	insn_attr_t lpfx_attr;

	lpfx_attr = inat_get_opcode_attribute(last_pfx);
	return inat_last_prefix_id(lpfx_attr);
}

insn_attr_t inat_get_escape_attribute(insn_byte_t opcode, int lpfx_id,
				      insn_attr_t esc_attr)
{
	const insn_attr_t *table;
	int n;

	n = inat_escape_id(esc_attr);

	table = inat_escape_tables[n][0];
	if (!table)
		return 0;
	if (inat_has_variant(table[opcode]) && lpfx_id) {
		table = inat_escape_tables[n][lpfx_id];
		if (!table)
			return 0;
	}
	return table[opcode];
}

insn_attr_t inat_get_group_attribute(insn_byte_t modrm, int lpfx_id,
				     insn_attr_t grp_attr)
{
	const insn_attr_t *table;
	int n;

	n = inat_group_id(grp_attr);

	table = inat_group_tables[n][0];
	if (!table)
		return inat_group_common_attribute(grp_attr);
	if (inat_has_variant(table[X86_MODRM_REG(modrm)]) && lpfx_id) {
		table = inat_group_tables[n][lpfx_id];
		if (!table)
			return inat_group_common_attribute(grp_attr);
	}
	return table[X86_MODRM_REG(modrm)] |
	       inat_group_common_attribute(grp_attr);
}

insn_attr_t inat_get_avx_attribute(insn_byte_t opcode, insn_byte_t vex_m,
				   insn_byte_t vex_p)
{
	const insn_attr_t *table;

	if (vex_m > X86_VEX_M_MAX || vex_p > INAT_LSTPFX_MAX)
		return 0;
	/* At first, this checks the master table */
	table = inat_avx_tables[vex_m][0];
	if (!table)
		return 0;
	if
	(!inat_is_group(table[opcode]) && vex_p) {
		/* If this is not a group, get attribute directly */
		table = inat_avx_tables[vex_m][vex_p];
		if (!table)
			return 0;
	}
	return table[opcode];
}

kpatch-0.3.2/kpatch-build/insn/insn.c:

/*
 * x86 instruction analysis
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
 *
 * Copyright (C) IBM Corporation, 2002, 2004, 2009
 */

#ifdef __KERNEL__
#include <linux/string.h>
#else
#include <string.h>
#endif
#include <asm/inat.h>
#include <asm/insn.h>

#define unlikely(a) a

/* Verify next sizeof(t) bytes can be on the same instruction */
#define validate_next(t, insn, n)	\
	((insn)->next_byte + sizeof(t) + n - (insn)->kaddr <= MAX_INSN_SIZE)

#define __get_next(t, insn)	\
	({ t r = *(t*)insn->next_byte; insn->next_byte += sizeof(t); r; })

#define __peek_nbyte_next(t, insn, n)	\
	({ t r = *(t*)((insn)->next_byte + n); r; })

#define get_next(t, insn)	\
	({ if (unlikely(!validate_next(t, insn, 0))) goto err_out; __get_next(t, insn); })

#define peek_nbyte_next(t, insn, n)	\
	({ if (unlikely(!validate_next(t, insn, n))) goto err_out; __peek_nbyte_next(t, insn, n); })

#define peek_next(t, insn)	peek_nbyte_next(t, insn, 0)

/**
 * insn_init() - initialize struct insn
 * @insn:	&struct insn to be initialized
 * @kaddr:	address (in kernel memory) of instruction (or copy thereof)
 * @x86_64:	!0 for 64-bit kernel or 64-bit app
 */
void insn_init(struct insn *insn, const void *kaddr, int x86_64)
{
	memset(insn, 0, sizeof(*insn));
	insn->kaddr = kaddr;
	insn->next_byte = kaddr;
	insn->x86_64 = x86_64 ? 1 : 0;
	insn->opnd_bytes = 4;
	if (x86_64)
		insn->addr_bytes = 8;
	else
		insn->addr_bytes = 4;
}

/**
 * insn_get_prefixes - scan x86 instruction prefix bytes
 * @insn:	&struct insn containing instruction
 *
 * Populates the @insn->prefixes bitmap, and updates @insn->next_byte
 * to point to the (first) opcode.  No effect if @insn->prefixes.got
 * is already set.
 */
void insn_get_prefixes(struct insn *insn)
{
	struct insn_field *prefixes = &insn->prefixes;
	insn_attr_t attr;
	insn_byte_t b, lb;
	int i, nb;

	if (prefixes->got)
		return;

	nb = 0;
	lb = 0;
	b = peek_next(insn_byte_t, insn);
	attr = inat_get_opcode_attribute(b);
	while (inat_is_legacy_prefix(attr)) {
		/* Skip if same prefix */
		for (i = 0; i < nb; i++)
			if (prefixes->bytes[i] == b)
				goto found;
		if (nb == 4)
			/* Invalid instruction */
			break;
		prefixes->bytes[nb++] = b;
		if (inat_is_address_size_prefix(attr)) {
			/* address size switches 2/4 or 4/8 */
			if (insn->x86_64)
				insn->addr_bytes ^= 12;
			else
				insn->addr_bytes ^= 6;
		} else if (inat_is_operand_size_prefix(attr)) {
			/* operand size switches 2/4 */
			insn->opnd_bytes ^= 6;
		}
found:
		prefixes->nbytes++;
		insn->next_byte++;
		lb = b;
		b = peek_next(insn_byte_t, insn);
		attr = inat_get_opcode_attribute(b);
	}
	/* Set the last prefix */
	if (lb && lb != insn->prefixes.bytes[3]) {
		if (unlikely(insn->prefixes.bytes[3])) {
			/* Swap the last prefix */
			b = insn->prefixes.bytes[3];
			for (i = 0; i < nb; i++)
				if (prefixes->bytes[i] == lb)
					prefixes->bytes[i] = b;
		}
		insn->prefixes.bytes[3] = lb;
	}

	/* Decode REX prefix */
	if (insn->x86_64) {
		b = peek_next(insn_byte_t, insn);
		attr = inat_get_opcode_attribute(b);
		if (inat_is_rex_prefix(attr)) {
			insn->rex_prefix.value = b;
			insn->rex_prefix.nbytes = 1;
			insn->next_byte++;
			if (X86_REX_W(b))
				/* REX.W overrides opnd_size */
				insn->opnd_bytes = 8;
		}
	}
	insn->rex_prefix.got = 1;

	/* Decode VEX prefix */
	b = peek_next(insn_byte_t, insn);
	attr = inat_get_opcode_attribute(b);
	if (inat_is_vex_prefix(attr)) {
		insn_byte_t b2 = peek_nbyte_next(insn_byte_t, insn, 1);
		if (!insn->x86_64) {
			/*
			 * In 32-bit mode, if the [7:6] bits (mod bits of
			 * ModRM) on the second byte are not 11b, it is
			 * LDS or LES.
*/ if (X86_MODRM_MOD(b2) != 3) goto vex_end; } insn->vex_prefix.bytes[0] = b; insn->vex_prefix.bytes[1] = b2; if (inat_is_vex3_prefix(attr)) { b2 = peek_nbyte_next(insn_byte_t, insn, 2); insn->vex_prefix.bytes[2] = b2; insn->vex_prefix.nbytes = 3; insn->next_byte += 3; if (insn->x86_64 && X86_VEX_W(b2)) /* VEX.W overrides opnd_size */ insn->opnd_bytes = 8; } else { insn->vex_prefix.nbytes = 2; insn->next_byte += 2; } } vex_end: insn->vex_prefix.got = 1; prefixes->got = 1; err_out: return; } /** * insn_get_opcode - collect opcode(s) * @insn: &struct insn containing instruction * * Populates @insn->opcode, updates @insn->next_byte to point past the * opcode byte(s), and set @insn->attr (except for groups). * If necessary, first collects any preceding (prefix) bytes. * Sets @insn->opcode.value = opcode1. No effect if @insn->opcode.got * is already 1. */ void insn_get_opcode(struct insn *insn) { struct insn_field *opcode = &insn->opcode; insn_byte_t op; int pfx_id; if (opcode->got) return; if (!insn->prefixes.got) insn_get_prefixes(insn); /* Get first opcode */ op = get_next(insn_byte_t, insn); opcode->bytes[0] = op; opcode->nbytes = 1; /* Check if there is VEX prefix or not */ if (insn_is_avx(insn)) { insn_byte_t m, p; m = insn_vex_m_bits(insn); p = insn_vex_p_bits(insn); insn->attr = inat_get_avx_attribute(op, m, p); if (!inat_accept_vex(insn->attr) && !inat_is_group(insn->attr)) insn->attr = 0; /* This instruction is bad */ goto end; /* VEX has only 1 byte for opcode */ } insn->attr = inat_get_opcode_attribute(op); while (inat_is_escape(insn->attr)) { /* Get escaped opcode */ op = get_next(insn_byte_t, insn); opcode->bytes[opcode->nbytes++] = op; pfx_id = insn_last_prefix_id(insn); insn->attr = inat_get_escape_attribute(op, pfx_id, insn->attr); } if (inat_must_vex(insn->attr)) insn->attr = 0; /* This instruction is bad */ end: opcode->got = 1; err_out: return; } /** * insn_get_modrm - collect ModRM byte, if any * @insn: &struct insn containing instruction * * 
Populates @insn->modrm and updates @insn->next_byte to point past the * ModRM byte, if any. If necessary, first collects the preceding bytes * (prefixes and opcode(s)). No effect if @insn->modrm.got is already 1. */ void insn_get_modrm(struct insn *insn) { struct insn_field *modrm = &insn->modrm; insn_byte_t pfx_id, mod; if (modrm->got) return; if (!insn->opcode.got) insn_get_opcode(insn); if (inat_has_modrm(insn->attr)) { mod = get_next(insn_byte_t, insn); modrm->value = mod; modrm->nbytes = 1; if (inat_is_group(insn->attr)) { pfx_id = insn_last_prefix_id(insn); insn->attr = inat_get_group_attribute(mod, pfx_id, insn->attr); if (insn_is_avx(insn) && !inat_accept_vex(insn->attr)) insn->attr = 0; /* This is bad */ } } if (insn->x86_64 && inat_is_force64(insn->attr)) insn->opnd_bytes = 8; modrm->got = 1; err_out: return; } /** * insn_rip_relative() - Does instruction use RIP-relative addressing mode? * @insn: &struct insn containing instruction * * If necessary, first collects the instruction up to and including the * ModRM byte. No effect if @insn->x86_64 is 0. */ int insn_rip_relative(struct insn *insn) { struct insn_field *modrm = &insn->modrm; if (!insn->x86_64) return 0; if (!modrm->got) insn_get_modrm(insn); /* * For rip-relative instructions, the mod field (top 2 bits) * is zero and the r/m field (bottom 3 bits) is 0x5. */ return (modrm->nbytes && (modrm->value & 0xc7) == 0x5); } /** * insn_get_sib() - Get the SIB byte of instruction * @insn: &struct insn containing instruction * * If necessary, first collects the instruction up to and including the * ModRM byte. 
*/ void insn_get_sib(struct insn *insn) { insn_byte_t modrm; if (insn->sib.got) return; if (!insn->modrm.got) insn_get_modrm(insn); if (insn->modrm.nbytes) { modrm = (insn_byte_t)insn->modrm.value; if (insn->addr_bytes != 2 && X86_MODRM_MOD(modrm) != 3 && X86_MODRM_RM(modrm) == 4) { insn->sib.value = get_next(insn_byte_t, insn); insn->sib.nbytes = 1; } } insn->sib.got = 1; err_out: return; } /** * insn_get_displacement() - Get the displacement of instruction * @insn: &struct insn containing instruction * * If necessary, first collects the instruction up to and including the * SIB byte. * Displacement value is sign-expanded. */ void insn_get_displacement(struct insn *insn) { insn_byte_t mod, rm, base; if (insn->displacement.got) return; if (!insn->sib.got) insn_get_sib(insn); if (insn->modrm.nbytes) { /* * Interpreting the modrm byte: * mod = 00 - no displacement fields (exceptions below) * mod = 01 - 1-byte displacement field * mod = 10 - displacement field is 4 bytes, or 2 bytes if * address size = 2 (0x67 prefix in 32-bit mode) * mod = 11 - no memory operand * * If address size = 2... * mod = 00, r/m = 110 - displacement field is 2 bytes * * If address size != 2... 
* mod != 11, r/m = 100 - SIB byte exists * mod = 00, SIB base = 101 - displacement field is 4 bytes * mod = 00, r/m = 101 - rip-relative addressing, displacement * field is 4 bytes */ mod = X86_MODRM_MOD(insn->modrm.value); rm = X86_MODRM_RM(insn->modrm.value); base = X86_SIB_BASE(insn->sib.value); if (mod == 3) goto out; if (mod == 1) { insn->displacement.value = get_next(char, insn); insn->displacement.nbytes = 1; } else if (insn->addr_bytes == 2) { if ((mod == 0 && rm == 6) || mod == 2) { insn->displacement.value = get_next(short, insn); insn->displacement.nbytes = 2; } } else { if ((mod == 0 && rm == 5) || mod == 2 || (mod == 0 && base == 5)) { insn->displacement.value = get_next(int, insn); insn->displacement.nbytes = 4; } } } out: insn->displacement.got = 1; err_out: return; } /* Decode moffset16/32/64. Return 0 if failed */ static int __get_moffset(struct insn *insn) { switch (insn->addr_bytes) { case 2: insn->moffset1.value = get_next(short, insn); insn->moffset1.nbytes = 2; break; case 4: insn->moffset1.value = get_next(int, insn); insn->moffset1.nbytes = 4; break; case 8: insn->moffset1.value = get_next(int, insn); insn->moffset1.nbytes = 4; insn->moffset2.value = get_next(int, insn); insn->moffset2.nbytes = 4; break; default: /* opnd_bytes must be modified manually */ goto err_out; } insn->moffset1.got = insn->moffset2.got = 1; return 1; err_out: return 0; } /* Decode imm v32(Iz). 
   Return 0 if failed */
static int __get_immv32(struct insn *insn)
{
	switch (insn->opnd_bytes) {
	case 2:
		insn->immediate.value = get_next(short, insn);
		insn->immediate.nbytes = 2;
		break;
	case 4:
	case 8:
		insn->immediate.value = get_next(int, insn);
		insn->immediate.nbytes = 4;
		break;
	default:	/* opnd_bytes must be modified manually */
		goto err_out;
	}

	return 1;

err_out:
	return 0;
}

/* Decode imm v64(Iv/Ov). Return 0 if failed */
static int __get_immv(struct insn *insn)
{
	switch (insn->opnd_bytes) {
	case 2:
		insn->immediate1.value = get_next(short, insn);
		insn->immediate1.nbytes = 2;
		break;
	case 4:
		insn->immediate1.value = get_next(int, insn);
		insn->immediate1.nbytes = 4;
		break;
	case 8:
		insn->immediate1.value = get_next(int, insn);
		insn->immediate1.nbytes = 4;
		insn->immediate2.value = get_next(int, insn);
		insn->immediate2.nbytes = 4;
		break;
	default:	/* opnd_bytes must be modified manually */
		goto err_out;
	}
	insn->immediate1.got = insn->immediate2.got = 1;

	return 1;
err_out:
	return 0;
}

/* Decode ptr16:16/32(Ap) */
static int __get_immptr(struct insn *insn)
{
	switch (insn->opnd_bytes) {
	case 2:
		insn->immediate1.value = get_next(short, insn);
		insn->immediate1.nbytes = 2;
		break;
	case 4:
		insn->immediate1.value = get_next(int, insn);
		insn->immediate1.nbytes = 4;
		break;
	case 8:
		/* ptr16:64 does not exist (no segment) */
		return 0;
	default:	/* opnd_bytes must be modified manually */
		goto err_out;
	}
	insn->immediate2.value = get_next(unsigned short, insn);
	insn->immediate2.nbytes = 2;
	insn->immediate1.got = insn->immediate2.got = 1;

	return 1;
err_out:
	return 0;
}

/**
 * insn_get_immediate() - Get the immediates of instruction
 * @insn:	&struct insn containing instruction
 *
 * If necessary, first collects the instruction up to and including the
 * displacement bytes.
 * Basically, most of the immediates are sign-expanded.  The unsigned
 * value can be obtained by bit masking with ((1 << (nbytes * 8)) - 1)
 */
void insn_get_immediate(struct insn *insn)
{
	if (insn->immediate.got)
		return;
	if (!insn->displacement.got)
		insn_get_displacement(insn);

	if (inat_has_moffset(insn->attr)) {
		if (!__get_moffset(insn))
			goto err_out;
		goto done;
	}

	if (!inat_has_immediate(insn->attr))
		/* no immediates */
		goto done;

	switch (inat_immediate_size(insn->attr)) {
	case INAT_IMM_BYTE:
		insn->immediate.value = get_next(char, insn);
		insn->immediate.nbytes = 1;
		break;
	case INAT_IMM_WORD:
		insn->immediate.value = get_next(short, insn);
		insn->immediate.nbytes = 2;
		break;
	case INAT_IMM_DWORD:
		insn->immediate.value = get_next(int, insn);
		insn->immediate.nbytes = 4;
		break;
	case INAT_IMM_QWORD:
		insn->immediate1.value = get_next(int, insn);
		insn->immediate1.nbytes = 4;
		insn->immediate2.value = get_next(int, insn);
		insn->immediate2.nbytes = 4;
		break;
	case INAT_IMM_PTR:
		if (!__get_immptr(insn))
			goto err_out;
		break;
	case INAT_IMM_VWORD32:
		if (!__get_immv32(insn))
			goto err_out;
		break;
	case INAT_IMM_VWORD:
		if (!__get_immv(insn))
			goto err_out;
		break;
	default:
		/* Here, insn must have an immediate, but failed */
		goto err_out;
	}
	if (inat_has_second_immediate(insn->attr)) {
		insn->immediate2.value = get_next(char, insn);
		insn->immediate2.nbytes = 1;
	}
done:
	insn->immediate.got = 1;
err_out:
	return;
}

/**
 * insn_get_length() - Get the length of instruction
 * @insn:	&struct insn containing instruction
 *
 * If necessary, first collects the instruction up to and including the
 * immediate bytes.
 */
void insn_get_length(struct insn *insn)
{
	if (insn->length)
		return;
	if (!insn->immediate.got)
		insn_get_immediate(insn);
	insn->length = (unsigned char)((unsigned long)insn->next_byte
				     - (unsigned long)insn->kaddr);
}

kpatch-0.3.2/kpatch-build/kpatch-build:

#!/bin/bash
#
# kpatch build script
#
# Copyright (C) 2014 Seth Jennings
# Copyright (C) 2013,2014 Josh Poimboeuf
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA,
# 02110-1301, USA.

# This script takes a patch based on the version of the kernel
# currently running and creates a kernel module that will
# replace modified functions in the kernel such that the
# patched code takes effect.
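The workflow the header describes is driven from a single command. The sketch below is a hypothetical invocation: the patch file name is made up, and the `-s`/`-v` options for pointing at a prepared source tree and vmlinux are assumptions based on kpatch documentation rather than taken from this excerpt of the script.

```shell
# Hypothetical usage sketch -- "fix-cve.patch" is a made-up file name,
# and the -s/-v options are assumptions, not shown in this excerpt.
#
# Simplest form: build against the currently running kernel, letting the
# script download and prepare the matching source package:
#   kpatch-build fix-cve.patch
#
# With an already-prepared kernel tree and its matching vmlinux:
#   kpatch-build -s ~/src/linux -v ~/src/linux/vmlinux fix-cve.patch
```

On success, the resulting livepatch module is written to the current directory and can be loaded and unloaded with the companion kpatch utility.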
# This script:
# - Either uses a specified kernel source directory or downloads the kernel
#   source package for the currently running kernel
# - Unpacks and prepares the source package for building if necessary
# - Builds the base kernel (vmlinux)
# - Builds the patched kernel and monitors changed objects
# - Builds the patched objects with gcc flags -f[function|data]-sections
# - Runs kpatch tools to create and link the patch kernel module

BASE="$PWD"
SCRIPTDIR="$(readlink -f $(dirname $(type -p $0)))"
ARCHVERSION="$(uname -r)"
CPUS="$(getconf _NPROCESSORS_ONLN)"
CACHEDIR="${CACHEDIR:-$HOME/.kpatch}"
SRCDIR="$CACHEDIR/src"
OBJDIR="$CACHEDIR/obj"
RPMTOPDIR="$CACHEDIR/buildroot"
VERSIONFILE="$CACHEDIR/version"
TEMPDIR="$CACHEDIR/tmp"
LOGFILE="$CACHEDIR/build.log"
APPLIEDPATCHFILE="kpatch.patch"
DEBUG=0
SKIPGCCCHECK=0

warn() {
	echo "ERROR: $1" >&2
}

die() {
	if [[ -z $1 ]]; then
		msg="kpatch build failed"
	else
		msg="$1"
	fi
	if [[ -e $LOGFILE ]]; then
		warn "$msg. Check $LOGFILE for more details."
	else
		warn "$msg."
	fi
	exit 1
}

cleanup() {
	rm -f "$SRCDIR/.scmversion"
	if [[ -e "$SRCDIR/$APPLIEDPATCHFILE" ]]; then
		patch -p1 -R -d "$SRCDIR" < "$SRCDIR/$APPLIEDPATCHFILE" &> /dev/null
		rm -f "$SRCDIR/$APPLIEDPATCHFILE"
	fi
	if [[ -n $USERSRCDIR ]]; then
		# restore original .config and vmlinux since they were removed
		# with mrproper
		[[ -e $TEMPDIR/vmlinux ]] && cp -f $TEMPDIR/vmlinux $USERSRCDIR
		[[ -e $TEMPDIR/.config ]] && cp -f $TEMPDIR/.config $USERSRCDIR
	fi
	[[ "$DEBUG" -eq 0 ]] && rm -rf "$TEMPDIR"
	rm -rf "$RPMTOPDIR"
	unset KCFLAGS
}

clean_cache() {
	[[ -z $USERSRCDIR ]] && rm -rf "$SRCDIR"
	rm -rf "$OBJDIR" "$VERSIONFILE"
	mkdir -p "$OBJDIR"
}

find_dirs() {
	if [[ -e "$SCRIPTDIR/create-diff-object" ]]; then
		# git repo
		TOOLSDIR="$SCRIPTDIR"
		DATADIR="$(readlink -f $SCRIPTDIR/../kmod)"
		SYMVERSFILE="$DATADIR/core/Module.symvers"
		return
	elif [[ -e "$SCRIPTDIR/../libexec/kpatch/create-diff-object" ]]; then
		# installation path
		TOOLSDIR="$(readlink -f $SCRIPTDIR/../libexec/kpatch)"
		DATADIR="$(readlink -f $SCRIPTDIR/../share/kpatch)"
		if [[ -e $SCRIPTDIR/../lib/kpatch/$ARCHVERSION/Module.symvers ]]; then
			SYMVERSFILE="$(readlink -f $SCRIPTDIR/../lib/kpatch/$ARCHVERSION/Module.symvers)"
		elif [[ -e /lib/modules/$ARCHVERSION/extra/kpatch/Module.symvers ]]; then
			SYMVERSFILE="$(readlink -f /lib/modules/$ARCHVERSION/extra/kpatch/Module.symvers)"
		else
			warn "unable to find Module.symvers for kpatch core module"
			return 1
		fi
		return
	fi
	return 1
}

gcc_version_check() {
	# ensure gcc version matches that used to build the kernel
	local gccver=$(gcc --version |head -n1 |cut -d' ' -f3-)
	local kgccver=$(readelf -p .comment $VMLINUX |grep GCC: | tr -s ' ' | cut -d ' ' -f6-)
	if [[ $gccver != $kgccver ]]; then
		warn "gcc/kernel version mismatch"
		echo "gcc version:    $gccver"
		echo "kernel version: $kgccver"
		echo "Install the matching gcc version (recommended) or use --skip-gcc-check"
		echo "to skip the version matching enforcement (not recommended)"
		return 1
	fi

	# ensure gcc version is >= 4.8
	gccver=$(echo $gccver |cut -d'.' -f1,2)
	if [[ $gccver < 4.8 ]]; then
		warn "gcc >= 4.8 required"
		return 1
	fi

	return
}

find_parent_obj() {
	dir=$(dirname $1)
	absdir=$(readlink -f $dir)
	pwddir=$(readlink -f .)
	pdir=${absdir#$pwddir/}
	file=$(basename $1)
	grepname=${1%.o}
	grepname=$grepname\\\.o
	if [[ $DEEP_FIND -eq 1 ]]; then
		num=0
		if [[ -n $last_deep_find ]]; then
			parent=$(grep -l $grepname $last_deep_find/.*.cmd | grep -v $pdir/.${file}.cmd |head -n1)
			num=$(grep -l $grepname $last_deep_find/.*.cmd | grep -v $pdir/.${file}.cmd |wc -l)
		fi
		if [[ $num -eq 0 ]]; then
			parent=$(find * -name ".*.cmd" | xargs grep -l $grepname | grep -v $pdir/.${file}.cmd |head -n1)
			num=$(find * -name ".*.cmd" | xargs grep -l $grepname | grep -v $pdir/.${file}.cmd | wc -l)
			[[ $num -eq 1 ]] && last_deep_find=$(dirname $parent)
		fi
	else
		parent=$(grep -l $grepname $dir/.*.cmd | grep -v $dir/.${file}.cmd |head -n1)
		num=$(grep -l $grepname $dir/.*.cmd | grep -v $dir/.${file}.cmd | wc -l)
	fi

	[[ $num -eq 0 ]] && PARENT="" && return
	[[ $num -gt 1 ]] && ERROR_IF_DIFF="two parent matches for $1"

	dir=$(dirname $parent)
	PARENT=$(basename $parent)
	PARENT=${PARENT#.}
	PARENT=${PARENT%.cmd}
	PARENT=$dir/$PARENT
	[[ ! -e $PARENT ]] && die "ERROR: can't find parent $PARENT for $1"
}

find_kobj() {
	arg=$1
	KOBJFILE=$arg
	DEEP_FIND=0
	ERROR_IF_DIFF=
	while true; do
		find_parent_obj $KOBJFILE
		[[ -n $PARENT ]] && DEEP_FIND=0
		if [[ -z $PARENT ]]; then
			[[ $KOBJFILE = *.ko ]] && return
			case $KOBJFILE in
				*/built-in.o|\
				arch/x86/lib/lib.a|\
				arch/x86/kernel/head*.o|\
				lib/lib.a)
					KOBJFILE=vmlinux
					return
			esac
			if [[ $DEEP_FIND -eq 0 ]]; then
				DEEP_FIND=1
				continue;
			fi
			die "invalid ancestor $KOBJFILE for $arg"
		fi
		KOBJFILE=$PARENT
	done
}

usage() {
	echo "usage: $(basename $0) [options] <patch file>" >&2
	echo "	-h, --help         Show this help message" >&2
	echo "	-r, --sourcerpm    Specify kernel source RPM" >&2
	echo "	-s, --sourcedir    Specify kernel source directory" >&2
	echo "	-c, --config       Specify kernel config file" >&2
	echo "	-v, --vmlinux      Specify original vmlinux" >&2
	echo "	-t, --target       Specify custom kernel build targets" >&2
	echo "	-d, --debug        Keep scratch files in /tmp" >&2
	echo "	--skip-gcc-check   Skip gcc version matching check" >&2
	echo "	                   (not recommended)" >&2
}

options=$(getopt -o hr:s:c:v:t:d -l "help,sourcerpm:,sourcedir:,config:,vmlinux:,target:,debug,skip-gcc-check" -- "$@") || die "getopt failed"

eval set -- "$options"

while [[ $# -gt 0 ]]; do
	case "$1" in
	-h|--help)
		usage
		exit 0
		;;
	-r|--sourcerpm)
		SRCRPM=$(readlink -f "$2")
		shift
		[[ ! -f "$SRCRPM" ]] && die "source rpm $SRCRPM not found"
		rpmname=$(basename "$SRCRPM")
		ARCHVERSION=${rpmname%.src.rpm}.$(uname -m)
		ARCHVERSION=${ARCHVERSION#kernel-}
		;;
	-s|--sourcedir)
		USERSRCDIR=$(readlink -f "$2")
		shift
		[[ ! -d "$USERSRCDIR" ]] && die "source dir $USERSRCDIR not found"
		;;
	-c|--config)
		CONFIGFILE=$(readlink -f "$2")
		shift
		[[ ! -f "$CONFIGFILE" ]] && die "config file $CONFIGFILE not found"
		;;
	-v|--vmlinux)
		VMLINUX=$(readlink -f "$2")
		shift
		[[ ! -f "$VMLINUX" ]] && die "vmlinux file $VMLINUX not found"
		;;
	-t|--target)
		TARGETS="$TARGETS $2"
		shift
		;;
	-d|--debug)
		echo "DEBUG mode enabled"
		DEBUG=1
		set -o xtrace
		;;
	--skip-gcc-check)
		echo "WARNING: Skipping gcc version matching check (not recommended)"
		SKIPGCCCHECK=1
		;;
	--)
		if [[ -z "$2" ]]; then
			warn "no patch file specified"
			usage
			exit 1
		fi
		PATCHFILE=$(readlink -f "$2")
		[[ ! -f "$PATCHFILE" ]] && die "patch file $PATCHFILE not found"
		break
		;;
	esac
	shift
done

# ensure cachedir and tempdir are setup properly and cleaned
mkdir -p "$TEMPDIR" || die "Couldn't create $TEMPDIR"
rm -rf "$TEMPDIR"/*
rm -f "$LOGFILE"

trap cleanup EXIT INT TERM HUP

if [[ -n $USERSRCDIR ]]; then
	# save .config and vmlinux since they'll get removed with mrproper so
	# we can restore them later and be able to run kpatch-build multiple
	# times on the same sourcedir
	[[ -z $CONFIGFILE ]] && CONFIGFILE="$USERSRCDIR"/.config
	[[ ! -e "$CONFIGFILE" ]] && die "can't find config file"
	[[ "$CONFIGFILE" = "$USERSRCDIR"/.config ]] && cp -f "$CONFIGFILE" $TEMPDIR

	[[ -z $VMLINUX ]] && VMLINUX="$USERSRCDIR"/vmlinux
	[[ ! -e "$VMLINUX" ]] && die "can't find vmlinux"
	[[ "$VMLINUX" = "$USERSRCDIR"/vmlinux ]] && cp -f "$VMLINUX" $TEMPDIR/vmlinux && VMLINUX=$TEMPDIR/vmlinux

	# Extract the target kernel version from vmlinux in this case.
	ARCHVERSION=$(strings "$VMLINUX" | grep -e "^Linux version" | awk '{ print($3); }')
fi

KVER=${ARCHVERSION%%-*}
if [[ $ARCHVERSION =~ - ]]; then
	KREL=${ARCHVERSION##*-}
	KREL=${KREL%.*}
fi

[[ -z $TARGETS ]] && TARGETS="vmlinux modules"

PATCHNAME=$(basename "$PATCHFILE")
if [[ "$PATCHNAME" =~ \.patch ]] || [[ "$PATCHNAME" =~ \.diff ]]; then
	PATCHNAME="${PATCHNAME%.*}"
fi

# Only allow alphanumerics and '_' and '-' in the module name.  Everything else
# is replaced with '-'.  Also truncate to 48 chars so the full name fits in the
# kernel's 56-byte module name array.
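The sanitization described in the comment above can be exercised in isolation before looking at how the script applies it; the patch file name below is made up for the example:

```shell
#!/bin/bash
# Hypothetical patch file name, for illustration only
PATCHNAME="fix CVE-2016-0728 (v2).patch"

PATCHNAME="${PATCHNAME%.*}"   # strip the .patch/.diff suffix first
# Replace every character outside [a-zA-Z0-9_-] with '-', then truncate to 48
PATCHNAME=$(echo ${PATCHNAME//[^a-zA-Z0-9_-]/-} | cut -c 1-48)
echo "$PATCHNAME"             # -> fix-CVE-2016-0728--v2-
```

Note that consecutive disallowed characters (the space and the parenthesis here) each become their own dash; the kernel only cares that the result fits its module name limit.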
PATCHNAME=$(echo ${PATCHNAME//[^a-zA-Z0-9_-]/-} |cut -c 1-48)

source /etc/os-release
DISTRO=$ID

if [[ $DISTRO = fedora ]] || [[ $DISTRO = rhel ]] || [[ $DISTRO = ol ]] || [[ $DISTRO = centos ]]; then
	[[ -z $VMLINUX ]] && VMLINUX=/usr/lib/debug/lib/modules/$ARCHVERSION/vmlinux
	[[ -e "$VMLINUX" ]] || die "kernel-debuginfo-$ARCHVERSION not installed"
	export PATH=/usr/lib64/ccache:$PATH

elif [[ $DISTRO = ubuntu ]] || [[ $DISTRO = debian ]]; then
	[[ -z $VMLINUX ]] && VMLINUX=/usr/lib/debug/boot/vmlinux-$ARCHVERSION

	if [[ $DISTRO = ubuntu ]]; then
		[[ -e "$VMLINUX" ]] || die "linux-image-$ARCHVERSION-dbgsym not installed"
	elif [[ $DISTRO = debian ]]; then
		[[ -e "$VMLINUX" ]] || die "linux-image-$ARCHVERSION-dbg not installed"
	fi

	export PATH=/usr/lib/ccache:$PATH
fi

find_dirs || die "can't find supporting tools"

[[ -e "$SYMVERSFILE" ]] || die "can't find core module Module.symvers"

if [[ $SKIPGCCCHECK -eq 0 ]]; then
	gcc_version_check || die
fi

if [[ -n "$USERSRCDIR" ]]; then
	echo "Using source directory at $USERSRCDIR"
	SRCDIR="$USERSRCDIR"
	clean_cache
	cp -f "$CONFIGFILE" "$OBJDIR/.config"

elif [[ -e "$SRCDIR" ]] && [[ -e "$VERSIONFILE" ]] && [[ $(cat "$VERSIONFILE") = $ARCHVERSION ]]; then
	echo "Using cache at $SRCDIR"

else
	if [[ $DISTRO = fedora ]] || [[ $DISTRO = rhel ]] || [[ $DISTRO = ol ]] || [[ $DISTRO = centos ]]; then

		echo "Fedora/Red Hat distribution detected"
		rpm -q --quiet rpmdevtools || die "rpmdevtools not installed"

		echo "Downloading kernel source for $ARCHVERSION"
		if [[ -z "$SRCRPM" ]]; then
			if [[ $DISTRO = fedora ]]; then
				wget -P $TEMPDIR http://kojipkgs.fedoraproject.org/packages/kernel/$KVER/$KREL/src/kernel-$KVER-$KREL.src.rpm >> "$LOGFILE" 2>&1 || die
			else
				rpm -q --quiet yum-utils || die "yum-utils not installed"
				yumdownloader --source --destdir "$TEMPDIR" "kernel-$ARCHVERSION" >> "$LOGFILE" 2>&1 || die
			fi
			SRCRPM="$TEMPDIR/kernel-$KVER-$KREL.src.rpm"
		fi

		echo "Unpacking kernel source"
		clean_cache
		rpm -D "_topdir $RPMTOPDIR" -ivh "$SRCRPM" >> "$LOGFILE" 2>&1 || die
		rpmbuild -D "_topdir $RPMTOPDIR" -bp "--target=$(uname -m)" "$RPMTOPDIR"/SPECS/kernel.spec >> "$LOGFILE" 2>&1 ||
			die "rpmbuild -bp failed.  you may need to run 'yum-builddep kernel' first."
		mv "$RPMTOPDIR"/BUILD/kernel-*/linux-"${ARCHVERSION%.*}"*"${ARCHVERSION##*.}" "$SRCDIR" >> "$LOGFILE" 2>&1 || die
		rm -rf "$RPMTOPDIR"
		cp "$SRCDIR/.config" "$OBJDIR" || die
		if [[ "$ARCHVERSION" == *-* ]]; then
			echo "-${ARCHVERSION##*-}" > "$SRCDIR/localversion" || die
		fi
		echo $ARCHVERSION > "$VERSIONFILE" || die

	elif [[ $DISTRO = ubuntu ]] || [[ $DISTRO = debian ]]; then

		echo "Debian/Ubuntu distribution detected"

		if [[ $DISTRO = ubuntu ]]; then
			# url may be changed for a different mirror
			url="http://archive.ubuntu.com/ubuntu/pool/main/l/linux"
			extension="bz2"
			sublevel="SUBLEVEL = 0"
			taroptions="xvjf"
		elif [[ $DISTRO = debian ]]; then
			# url may be changed for a different mirror
			url="http://ftp.debian.org/debian/pool/main/l/linux"
			extension="xz"
			sublevel="SUBLEVEL ="
			taroptions="xvf"
		fi

		# The linux-source packages are formatted like the following for:
		# ubuntu: linux-source-3.13.0_3.13.0-24.46_all.deb
		# debian: linux-source-3.14_3.14.7-1_all.deb
		pkgver="${KVER}_$(dpkg-query -W -f='${Version}' linux-image-$ARCHVERSION)"
		pkgname="linux-source-${pkgver}_all"

		cd $TEMPDIR
		echo "Downloading the kernel source for $ARCHVERSION"
		# Download source deb pkg
		(wget "$url/${pkgname}.deb" 2>&1) >> "$LOGFILE" || die "wget: Could not fetch $url/${pkgname}.deb"
		# Unpack
		echo "Unpacking kernel source"
		dpkg -x ${pkgname}.deb $TEMPDIR >> "$LOGFILE" || die "dpkg: Could not extract ${pkgname}.deb"
		# extract and move to SRCDIR
		tar $taroptions usr/src/linux-source-$KVER.tar.${extension} >> "$LOGFILE" || die "tar: Failed to extract kernel source package"
		clean_cache
		mv linux-source-$KVER "$SRCDIR" || die
		cp "/boot/config-${ARCHVERSION}" "$OBJDIR/.config" || die
		if [[ "$ARCHVERSION" == *-* ]]; then
			echo "-${ARCHVERSION#*-}" > "$SRCDIR/localversion" || die
		fi
		# for some reason the Ubuntu kernel versions don't follow the
		# upstream SUBLEVEL; they are always at SUBLEVEL 0
		sed -i "s/^SUBLEVEL.*/${sublevel}/" "$SRCDIR/Makefile" || die
		echo $ARCHVERSION > "$VERSIONFILE" || die

	else
		die "Unsupported distribution"
	fi
fi

echo "Testing patch file"
cd "$SRCDIR" || die
patch -N -p1 --dry-run < "$PATCHFILE" || die "source patch file failed to apply"
cp "$PATCHFILE" "$APPLIEDPATCHFILE" || die
cp -LR "$DATADIR/patch" "$TEMPDIR" || die
export KCFLAGS="-I$DATADIR/patch -ffunction-sections -fdata-sections"

echo "Reading special section data"
SPECIAL_VARS=$(readelf -wi "$VMLINUX" | gawk --non-decimal-data '
	BEGIN { a = b = p = e = 0 }
	a == 0 && /DW_AT_name.* alt_instr\s*$/ {a = 1; next}
	b == 0 && /DW_AT_name.* bug_entry\s*$/ {b = 1; next}
	p == 0 && /DW_AT_name.* paravirt_patch_site\s*$/ {p = 1; next}
	e == 0 && /DW_AT_name.* exception_table_entry\s*$/ {e = 1; next}
	a == 1 {printf("export ALT_STRUCT_SIZE=%d\n", $4); a = 2}
	b == 1 {printf("export BUG_STRUCT_SIZE=%d\n", $4); b = 2}
	p == 1 {printf("export PARA_STRUCT_SIZE=%d\n", $4); p = 2}
	e == 1 {printf("export EX_STRUCT_SIZE=%d\n", $4); e = 2}
	a == 2 && b == 2 && p == 2 && e == 2 {exit}')

[[ -n $SPECIAL_VARS ]] && eval "$SPECIAL_VARS"

if [[ -z $ALT_STRUCT_SIZE ]] || [[ -z $BUG_STRUCT_SIZE ]] || [[ -z $PARA_STRUCT_SIZE ]] || [[ -z $EX_STRUCT_SIZE ]]; then
	die "can't find special struct size"
fi

for i in $ALT_STRUCT_SIZE $BUG_STRUCT_SIZE $PARA_STRUCT_SIZE $EX_STRUCT_SIZE; do
	if [[ ! $i -gt 0 ]] || [[ ! $i -le 16 ]]; then
		die "invalid special struct size $i"
	fi
done

echo "Building original kernel"
./scripts/setlocalversion --save-scmversion || die
make mrproper >> "$LOGFILE" 2>&1 || die
CROSS_COMPILE="$TOOLSDIR/kpatch-gcc " make "-j$CPUS" $TARGETS "O=$OBJDIR" >> "$LOGFILE" 2>&1 || die

echo "Building patched kernel"
patch -N -p1 < "$APPLIEDPATCHFILE" >> "$LOGFILE" 2>&1 || die
mkdir -p "$TEMPDIR/orig" "$TEMPDIR/patched"
export TEMPDIR
# TODO: remove custom LDFLAGS and ugly "undefined reference" grep when core
# module gets moved to the kernel tree
CROSS_COMPILE="$TOOLSDIR/kpatch-gcc " \
	LDFLAGS_vmlinux="--warn-unresolved-symbols" \
	KBUILD_MODPOST_WARN=1 \
	make "-j$CPUS" $TARGETS "O=$OBJDIR" >> "$LOGFILE" 2>&1 || die
[[ "${PIPESTATUS[0]}" -eq 0 ]] || die
grep "undefined reference" "$LOGFILE" | grep -qv kpatch_shadow && die
grep "undefined!" "$LOGFILE" |grep -qv kpatch_shadow && die

if [[ ! -e "$TEMPDIR/changed_objs" ]]; then
	die "no changed objects found"
fi

for i in $(cat "$TEMPDIR/changed_objs")
do
	mkdir -p "$TEMPDIR/patched/$(dirname $i)" || die
	cp -f "$OBJDIR/$i" "$TEMPDIR/patched/$i" || die
done

echo "Extracting new and modified ELF sections"
FILES="$(cat "$TEMPDIR/changed_objs")"
cd "$TEMPDIR"
mkdir output
declare -a objnames
CHANGED=0
ERROR=0
for i in $FILES; do
	mkdir -p "output/$(dirname $i)"
	cd "$OBJDIR"
	find_kobj $i
	if [[ $KOBJFILE = vmlinux ]]; then
		KOBJFILE=$VMLINUX
	else
		KOBJFILE="$TEMPDIR/module/$KOBJFILE"
	fi
	cd $TEMPDIR
	debugopt=
	[[ $DEBUG -eq 1 ]] && debugopt=-d
	if [[ -e "orig/$i" ]]; then
		"$TOOLSDIR"/create-diff-object $debugopt "orig/$i" "patched/$i" "$KOBJFILE" "output/$i" 2>&1 |tee -a "$LOGFILE"
		rc="${PIPESTATUS[0]}"
		if [[ $rc = 139 ]]; then
			warn "create-diff-object SIGSEGV"
			if ls core* &> /dev/null; then
				cp core* /tmp
				die "core file at /tmp/$(ls core*)"
			fi
			die "no core file found, run 'ulimit -c unlimited' and try to recreate"
		fi
		# create-diff-object returns 3 if no functional change is found
		[[ $rc -eq 0 ]] || [[ $rc -eq 3 ]] ||
			ERROR=$(expr $ERROR "+" 1)
		if [[ $rc -eq 0 ]]; then
			[[ -n $ERROR_IF_DIFF ]] && die $ERROR_IF_DIFF
			CHANGED=1
			objnames[${#objnames[@]}]=$KOBJFILE
		fi
	else
		cp -f "patched/$i" "output/$i"
		objnames[${#objnames[@]}]=$KOBJFILE
	fi
done

if [[ $ERROR -ne 0 ]]; then
	die "$ERROR error(s) encountered"
fi

if [[ $CHANGED -eq 0 ]]; then
	die "no functional changes found"
fi

echo -n "Patched objects:"
for i in $(echo "${objnames[@]}" | tr ' ' '\n' | sort -u | tr '\n' ' ')
do
	echo -n " $(basename $i)"
done
echo

export KCFLAGS="-I$DATADIR/patch"

echo "Building patch module: kpatch-$PATCHNAME.ko"
cp "$OBJDIR/.config" "$SRCDIR"
cd "$SRCDIR"
make prepare >> "$LOGFILE" 2>&1 || die

cd "$TEMPDIR/output"
ld -r -o ../patch/output.o $(find . -name "*.o") >> "$LOGFILE" 2>&1 || die
md5sum ../patch/output.o | awk '{printf "%s\0", $1}' > checksum.tmp || die
objcopy --add-section .kpatch.checksum=checksum.tmp --set-section-flags .kpatch.checksum=alloc,load,contents,readonly ../patch/output.o || die
rm -f checksum.tmp
cd "$TEMPDIR/patch"
KPATCH_BUILD="$SRCDIR" KPATCH_NAME="$PATCHNAME" KBUILD_EXTRA_SYMBOLS="$SYMVERSFILE" make "O=$OBJDIR" >> "$LOGFILE" 2>&1 || die

cp -f "$TEMPDIR/patch/kpatch-$PATCHNAME.ko" "$BASE" || die

[[ "$DEBUG" -eq 0 ]] && rm -f "$LOGFILE"

echo "SUCCESS"

--- kpatch-0.3.2/kpatch-build/kpatch-gcc ---

#!/bin/bash

set -x

TOOLCHAINCMD="$1"
shift

if [[ -z "$TEMPDIR" ]]; then
	exec "$TOOLCHAINCMD" "$@"
fi

declare -a args=("$@")

if [[ "$TOOLCHAINCMD" = "gcc" ]] ; then
	while [ "$#" -gt 0 ]; do
		if [ "$1" = "-o" ]; then
			obj=$2
			[[ $2 = */.tmp_*.o ]] && obj=${2/.tmp_/}
			case "$obj" in
			*.mod.o|\
			*built-in.o|\
			vmlinux.o|\
			.tmp_kallsyms1.o|\
			.tmp_kallsyms2.o|\
			init/version.o|\
			arch/x86/boot/version.o|\
			arch/x86/boot/compressed/eboot.o|\
			arch/x86/boot/header.o|\
			arch/x86/boot/compressed/efi_stub_64.o|\
			arch/x86/boot/compressed/piggy.o|\
			kernel/system_certificates.o|\
			arch/x86/vdso/*|\
			arch/x86/entry/vdso/*|\
			drivers/firmware/efi/libstub/*|\
			.*.o)
				break
				;;
			*.o)
				mkdir -p "$TEMPDIR/orig/$(dirname $obj)"
				[[ -e $obj ]] && cp -f "$obj" "$TEMPDIR/orig/$obj"
				echo "$obj" >> "$TEMPDIR/changed_objs"
				break
				;;
			*)
				break
				;;
			esac
		fi
		shift
	done
elif [[ "$TOOLCHAINCMD" = "ld" ]] ; then
	while [ "$#" -gt 0 ]; do
		if [ "$1" = "-o" ]; then
			obj=$2
			case "$obj" in
			*.ko)
				mkdir -p "$TEMPDIR/module/$(dirname $obj)"
				cp -f "$obj" "$TEMPDIR/module/$obj"
				break
				;;
			*)
				break
				;;
			esac
		fi
		shift
	done
fi

exec "$TOOLCHAINCMD" "${args[@]}"

--- kpatch-0.3.2/kpatch-build/list.h ---

/*
 * list.h
 *
 * Adapted from http://www.mcs.anl.gov/~kazutomo/list/list.h which is a
 * userspace port of the Linux kernel implementation in include/linux/list.h
 *
 * Thus licensed as GPLv2.
 *
 * Copyright (C) 2014 Seth Jennings
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA,
 * 02110-1301, USA.
 */

#ifndef _LIST_H
#define _LIST_H

/**
 * Get offset of a member
 */
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)

/**
 * Casts a member of a structure out to the containing structure
 * @param ptr        the pointer to the member.
 * @param type       the type of the container struct this is embedded in.
 * @param member     the name of the member within the struct.
 *
 */
#define container_of(ptr, type, member) ({			\
	const typeof( ((type *)0)->member ) *__mptr = (ptr);	\
	(type *)( (char *)__mptr - offsetof(type,member) );})

/*
 * These are non-NULL pointers that will result in page faults
 * under normal circumstances, used to verify that nobody uses
 * non-initialized list entries.
 */
#define LIST_POISON1  ((void *) 0x00100100)
#define LIST_POISON2  ((void *) 0x00200200)

/**
 * Simple doubly linked list implementation.
 *
 * Some of the internal functions ("__xxx") are useful when
 * manipulating whole lists rather than single entries, as
 * sometimes we already know the next/prev entries and we can
 * generate better code by using them directly rather than
 * using the generic single-entry routines.
 */
struct list_head {
	struct list_head *next, *prev;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }

#define LIST_HEAD(name) \
	struct list_head name = LIST_HEAD_INIT(name)

#define INIT_LIST_HEAD(ptr) do { \
	(ptr)->next = (ptr); (ptr)->prev = (ptr); \
} while (0)

/*
 * Insert a new entry between two known consecutive entries.
 *
 * This is only for internal list manipulation where we know
 * the prev/next entries already!
 */
static inline void __list_add(struct list_head *new,
			      struct list_head *prev,
			      struct list_head *next)
{
	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}

/**
 * list_add - add a new entry
 * @new: new entry to be added
 * @head: list head to add it after
 *
 * Insert a new entry after the specified head.
 * This is good for implementing stacks.
 */
static inline void list_add(struct list_head *new, struct list_head *head)
{
	__list_add(new, head, head->next);
}

/**
 * list_add_tail - add a new entry
 * @new: new entry to be added
 * @head: list head to add it before
 *
 * Insert a new entry before the specified head.
 * This is useful for implementing queues.
 */
static inline void list_add_tail(struct list_head *new, struct list_head *head)
{
	__list_add(new, head->prev, head);
}

/*
 * Delete a list entry by making the prev/next entries
 * point to each other.
 *
 * This is only for internal list manipulation where we know
 * the prev/next entries already!
 */
static inline void __list_del(struct list_head * prev, struct list_head * next)
{
	next->prev = prev;
	prev->next = next;
}

/**
 * list_del - deletes entry from list.
 * @entry: the element to delete from the list.
 * Note: list_empty on entry does not return true after this, the entry is
 * in an undefined state.
 */
static inline void list_del(struct list_head *entry)
{
	__list_del(entry->prev, entry->next);
	entry->next = LIST_POISON1;
	entry->prev = LIST_POISON2;
}

/**
 * list_replace - replace old entry by new one
 * @old : the element to be replaced
 * @new : the new element to insert
 *
 * If @old was empty, it will be overwritten.
 */
static inline void list_replace(struct list_head *old,
				struct list_head *new)
{
	new->next = old->next;
	new->next->prev = new;
	new->prev = old->prev;
	new->prev->next = new;
}

#define list_entry(ptr, type, member) \
	container_of(ptr, type, member)

/**
 * list_first_entry - get the first element from a list
 * @ptr:	the list head to take the element from.
 * @type:	the type of the struct this is embedded in.
 * @member:	the name of the list_struct within the struct.
 *
 * Note, that list is expected to be not empty.
 */
#define list_first_entry(ptr, type, member) \
	list_entry((ptr)->next, type, member)

/**
 * list_next_entry - get the next element in list
 * @pos:	the type * to cursor
 * @member:	the name of the list_struct within the struct.
 */
#define list_next_entry(pos, member) \
	list_entry((pos)->member.next, typeof(*(pos)), member)

/**
 * list_for_each_entry - iterate over list of given type
 * @pos:	the type * to use as a loop counter.
 * @head:	the head for your list.
 * @member:	the name of the list_struct within the struct.
 */
#define list_for_each_entry(pos, head, member)				\
	for (pos = list_entry((head)->next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = list_entry(pos->member.next, typeof(*pos), member))

/**
 * list_for_each_entry_continue - continue iteration over list of given type
 * @pos:	the type * to use as a loop cursor.
 * @head:	the head for your list.
 * @member:	the name of the list_struct within the struct.
 *
 * Continue to iterate over list of given type, continuing after
 * the current position.
 */
#define list_for_each_entry_continue(pos, head, member)		\
	for (pos = list_next_entry(pos, member);		\
	     &pos->member != (head);				\
	     pos = list_next_entry(pos, member))

/**
 * list_for_each_entry_safe - iterate over list of given type safe against removal of list entry
 * @pos:	the type * to use as a loop counter.
 * @n:		another type * to use as temporary storage
 * @head:	the head for your list.
 * @member:	the name of the list_struct within the struct.
 */
#define list_for_each_entry_safe(pos, n, head, member)			\
	for (pos = list_entry((head)->next, typeof(*pos), member),	\
		n = list_entry(pos->member.next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = n, n = list_entry(n->member.next, typeof(*n), member))

#endif /* _LIST_H_ */

--- kpatch-0.3.2/kpatch-build/lookup.c ---

/*
 * lookup.c
 *
 * This file contains functions that assist in reading and searching
 * the symbol table of an ELF object.
 *
 * Copyright (C) 2014 Seth Jennings
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * as published by the Free Software Foundation; either version 2
 * of the License, or (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
 * See the GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA,
 * 02110-1301, USA.
 */

#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <error.h>
#include <gelf.h>

#include "lookup.h"

#define ERROR(format, ...) \
	error(1, 0, "%s: %d: " format, __FUNCTION__, __LINE__, ##__VA_ARGS__)

struct symbol {
	unsigned long value;
	unsigned long size;
	char *name;
	int type, bind, skip;
};

struct lookup_table {
	int fd, nr;
	Elf *elf;
	struct symbol *syms;
};

#define for_each_symbol(ndx, iter, table) \
	for (ndx = 0, iter = table->syms; ndx < table->nr; ndx++, iter++)

struct lookup_table *lookup_open(char *path)
{
	Elf *elf;
	int fd, i, len;
	Elf_Scn *scn;
	GElf_Shdr sh;
	GElf_Sym sym;
	Elf_Data *data;
	char *name;
	struct lookup_table *table;
	struct symbol *mysym;
	size_t shstrndx;

	if ((fd = open(path, O_RDONLY, 0)) < 0)
		ERROR("open");

	elf_version(EV_CURRENT);

	elf = elf_begin(fd, ELF_C_READ_MMAP, NULL);
	if (!elf) {
		printf("%s\n", elf_errmsg(-1));
		ERROR("elf_begin");
	}

	if (elf_getshdrstrndx(elf, &shstrndx))
		ERROR("elf_getshdrstrndx");

	scn = NULL;
	while ((scn = elf_nextscn(elf, scn))) {
		if (!gelf_getshdr(scn, &sh))
			ERROR("gelf_getshdr");

		name = elf_strptr(elf, shstrndx, sh.sh_name);
		if (!name)
			ERROR("elf_strptr scn");

		if (!strcmp(name, ".symtab"))
			break;
	}

	if (!scn)
		ERROR(".symtab section not found");

	data = elf_getdata(scn, NULL);
	if (!data)
		ERROR("elf_getdata");

	len = sh.sh_size / sh.sh_entsize;

	table = malloc(sizeof(*table));
	if (!table)
		ERROR("malloc table");
	table->syms = malloc(len * sizeof(struct symbol));
	if (!table->syms)
		ERROR("malloc table.syms");
	memset(table->syms, 0, len * sizeof(struct symbol));
	table->nr = len;
	table->fd = fd;
	table->elf = elf;

	for_each_symbol(i, mysym, table) {
		if (!gelf_getsym(data, i, &sym))
			ERROR("gelf_getsym");

		if (sym.st_shndx == SHN_UNDEF) {
			mysym->skip = 1;
			continue;
		}

		name = elf_strptr(elf, sh.sh_link, sym.st_name);
		if(!name)
			ERROR("elf_strptr sym");

		mysym->value = sym.st_value;
		mysym->size = sym.st_size;
		mysym->type = GELF_ST_TYPE(sym.st_info);
		mysym->bind = GELF_ST_BIND(sym.st_info);
		mysym->name = name;
	}

	return table;
}

void lookup_close(struct lookup_table *table)
{
	elf_end(table->elf);
	close(table->fd);
	free(table);
}

int lookup_local_symbol(struct lookup_table *table, char *name, char *hint,
			struct lookup_result *result)
{
	struct symbol *sym, *match = NULL;
	int i;
	unsigned long pos = 0;
	char *curfile = NULL;

	memset(result, 0, sizeof(*result));
	for_each_symbol(i, sym, table) {
		if (sym->type == STT_FILE) {
			if (!strcmp(sym->name, hint)) {
				curfile = sym->name;
				continue; /* begin hint file symbols */
			} else if (curfile)
				curfile = NULL; /* end hint file symbols */
		}
		if (sym->bind == STB_LOCAL) {
			if (sym->name && !strcmp(sym->name, name)) {
				/*
				 * need to count any occurrence of the symbol
				 * name, unless we've already found a match
				 */
				if (!match)
					pos++;

				if (!curfile)
					continue;

				if (match)
					/* dup file+symbol, unresolvable ambiguity */
					return 1;
				match = sym;
			}
		}
	}

	if (!match)
		return 1;

	result->pos = pos;
	result->value = match->value;
	result->size = match->size;
	return 0;
}

int lookup_global_symbol(struct lookup_table *table, char *name,
			 struct lookup_result *result)
{
	struct symbol *sym;
	int i;

	memset(result, 0, sizeof(*result));
	for_each_symbol(i, sym, table)
		if (!sym->skip && (sym->bind == STB_GLOBAL || sym->bind == STB_WEAK) &&
		    !strcmp(sym->name, name)) {
			result->value = sym->value;
			result->size = sym->size;
			result->pos = 0; /* always 0 for global symbols */
			return 0;
		}

	return 1;
}

int lookup_is_exported_symbol(struct lookup_table *table, char *name)
{
	struct symbol *sym;
	int i;
	char export[255] = "__ksymtab_";

	strncat(export, name, 254);

	for_each_symbol(i, sym, table)
		if (!sym->skip && !strcmp(sym->name, export))
			return 1;

	return 0;
}

#if 0 /* for local testing */
static void find_this(struct lookup_table *table, char *sym, char
		      *hint)
{
	struct lookup_result result;

	if (hint)
		lookup_local_symbol(table, sym, hint, &result);
	else
		lookup_global_symbol(table, sym, &result);

	printf("%s %s w/ %s hint at 0x%016lx len %lu pos %lu\n",
	       hint ? "local" : "global", sym, hint ? hint : "no",
	       result.value, result.size, result.pos);
}

int main(int argc, char **argv)
{
	struct lookup_table *vmlinux;

	if (argc != 2)
		return 1;

	vmlinux = lookup_open(argv[1]);

	printf("printk is%s exported\n",
	       lookup_is_exported_symbol(vmlinux, "__fentry__") ? "" : " not");
	printf("meminfo_proc_show is%s exported\n",
	       lookup_is_exported_symbol(vmlinux, "meminfo_proc_show") ? "" : " not");

	find_this(vmlinux, "printk", NULL);
	find_this(vmlinux, "pages_to_scan_show", "ksm.c");
	find_this(vmlinux, "pages_to_scan_show", "huge_memory.c");
	find_this(vmlinux, "pages_to_scan_show", NULL); /* should fail */

	lookup_close(vmlinux);

	return 0;
}
#endif

--- kpatch-0.3.2/kpatch-build/lookup.h ---

#ifndef _LOOKUP_H_
#define _LOOKUP_H_

struct lookup_table;

struct lookup_result {
	unsigned long value;
	unsigned long size;
	unsigned long pos;
};

struct lookup_table *lookup_open(char *path);
void lookup_close(struct lookup_table *table);
int lookup_local_symbol(struct lookup_table *table, char *name, char *hint,
			struct lookup_result *result);
int lookup_global_symbol(struct lookup_table *table, char *name,
			 struct lookup_result *result);
int lookup_is_exported_symbol(struct lookup_table *table, char *name);

#endif /* _LOOKUP_H_ */

--- kpatch-0.3.2/kpatch/Makefile ---

include ../Makefile.inc

all:

install: all
	$(INSTALL) -d $(SBINDIR)
	$(INSTALL) kpatch $(SBINDIR)

uninstall:
	$(RM) $(SBINDIR)/kpatch

clean:

--- kpatch-0.3.2/kpatch/kpatch ---

#!/bin/bash
#
# kpatch hot patch module management script
#
# Copyright (C) 2014 Seth Jennings
# Copyright (C) 2014 Josh Poimboeuf
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA,
# 02110-1301, USA.

# This is the kpatch user script that manages installing, loading, and
# displaying information about kernel patch modules installed on the system.

INSTALLDIR=/var/lib/kpatch
SCRIPTDIR="$(readlink -f $(dirname $(type -p $0)))"
VERSION="0.3.2"

usage_cmd() {
	printf '   %-20s\n      %s\n' "$1" "$2" >&2
}

usage () {
	# ATTENTION ATTENTION ATTENTION ATTENTION ATTENTION ATTENTION
	# When changing this, please also update the man page.  Thanks!
	echo "usage: kpatch <command> [<args>]" >&2
	echo >&2
	echo "Valid commands:" >&2
	usage_cmd "install [-k|--kernel-version=<kernel version>] <module>" "install patch module to the initrd to be loaded at boot"
	usage_cmd "uninstall [-k|--kernel-version=<kernel version>] <module>" "uninstall patch module from the initrd"
	echo >&2
	usage_cmd "load --all" "load all installed patch modules into the running kernel"
	usage_cmd "load <module>" "load patch module into the running kernel"
	usage_cmd "unload --all" "unload all patch modules from the running kernel"
	usage_cmd "unload <module>" "unload patch module from the running kernel"
	echo >&2
	usage_cmd "info <module>" "show information about a patch module"
	echo >&2
	usage_cmd "list" "list installed patch modules"
	echo >&2
	usage_cmd "version" "display the kpatch version"
	exit 1
}

warn() {
	echo "kpatch: $@" >&2
}

die() {
	warn "$@"
	exit 1
}

__find_module () {
	MODULE="$1"
	[[ -f "$MODULE" ]] && return

	MODULE=$INSTALLDIR/$(uname -r)/"$1"
	[[ -f "$MODULE" ]] && return

	return 1
}

mod_name () {
	MODNAME="$(basename $1)"
	MODNAME="${MODNAME%.ko}"
	MODNAME="${MODNAME//-/_}"
}

find_module () {
	arg="$1"
	if [[ "$arg" =~ \.ko ]]; then
		__find_module "$arg" || return 1
		mod_name "$MODULE"
		return
	else
		for i in $INSTALLDIR/$(uname -r)/*; do
			mod_name "$i"
			if [[ $MODNAME == $arg ]]; then
				MODULE="$i"
				return
			fi
		done
	fi

	return 1
}

find_core_module() {
	COREMOD="$SCRIPTDIR"/../kmod/core/kpatch.ko
	[[ -f "$COREMOD" ]] && return

	COREMOD="/usr/local/lib/kpatch/$(uname -r)/kpatch.ko"
	[[ -f "$COREMOD" ]] && return

	COREMOD="/usr/lib/kpatch/$(uname -r)/kpatch.ko"
	[[ -f "$COREMOD" ]] && return

	COREMOD="/usr/local/lib/modules/$(uname -r)/extra/kpatch/kpatch.ko"
	[[ -f "$COREMOD" ]] && return

	COREMOD="/usr/lib/modules/$(uname -r)/extra/kpatch/kpatch.ko"
	[[ -f "$COREMOD" ]] && return

	return 1
}

core_module_loaded () {
	grep -q "T kpatch_register" /proc/kallsyms
}

get_module_name () {
	echo $(readelf -p .gnu.linkonce.this_module $1 | grep '\[.*\]' | awk '{print $3}')
}

verify_module_checksum () {
	modname=$(get_module_name $1)
	[[ -z $modname ]] && return 1

	checksum=$(readelf -p .kpatch.checksum $1 | grep '\[.*\]' | awk '{print $3}')
	[[ -z $checksum ]] && return 1

	sysfs_checksum=$(cat /sys/kernel/kpatch/patches/${modname}/checksum)
	[[ $checksum == $sysfs_checksum ]] || return 1
}

load_module () {
	if ! core_module_loaded; then
		if modprobe -q kpatch; then
			echo "loaded core module"
		else
			find_core_module || die "can't find core module"
			echo "loading core module: $COREMOD"
			insmod "$COREMOD" || die "failed to load core module"
		fi
	fi

	modname=$(get_module_name $1)
	moddir=/sys/kernel/kpatch/patches/$modname
	if [[ -d $moddir ]] ; then
		if [[ $(cat "${moddir}/enabled") -eq 0 ]]; then
			if verify_module_checksum $1; then # same checksum
				echo "module already loaded, re-enabling"
				echo 1 > ${moddir}/enabled || die "failed to re-enable module $modname"
				return
			else
				die "error: cannot re-enable patch module $modname, cannot verify checksum match"
			fi
		else
			die "error: module named $modname already loaded and enabled"
		fi
	fi

	echo "loading patch module: $1"
	insmod "$1" "$2"
}

unload_module () {
	PATCH="${1//-/_}"
	PATCH="${PATCH%.ko}"
	ENABLED=/sys/kernel/kpatch/patches/"$PATCH"/enabled
	[[ -e "$ENABLED" ]] || die "patch module $1 is not loaded"

	if [[ $(cat "$ENABLED") -eq 1 ]]; then
		echo "disabling patch module: $PATCH"
		echo 0 > $ENABLED || die "can't disable $PATCH"
	fi

	echo "unloading patch module: $PATCH"
	# ignore any error here because rmmod can fail if the module used
	# KPATCH_FORCE_UNSAFE.
	rmmod $PATCH 2> /dev/null || return 0
}

get_module_version() {
	MODVER=$(modinfo -F vermagic "$1") || return 1
	MODVER=${MODVER/ */}
}

unset MODULE

[[ "$#" -lt 1 ]] && usage

case "$1" in

"load")
	[[ "$#" -ne 2 ]] && usage
	case "$2" in
	"--all")
		for i in "$INSTALLDIR"/$(uname -r)/*.ko; do
			[[ -e "$i" ]] || continue
			load_module "$i" || die "failed to load module $i"
		done
		;;
	*)
		PATCH="$2"
		find_module "$PATCH" || die "can't find $PATCH"
		load_module "$MODULE" || die "failed to load module $PATCH"
		;;
	esac
	;;

"unload")
	[[ "$#" -ne 2 ]] && usage
	case "$2" in
	"--all")
		for module in /sys/kernel/kpatch/patches/*; do
			[[ -e $module ]] || continue
			unload_module $(basename $module) || die "failed to unload module $module"
		done
		;;
	*)
		unload_module "$(basename $2)" || die "failed to unload module $2"
		;;
	esac
	;;

"install")
	KVER=$(uname -r)
	shift
	options=$(getopt -o k: -l "kernel-version:" -- "$@") || die "getopt failed"
	eval set -- "$options"

	while [[ $# -gt 0 ]]; do
		case "$1" in
		-k|--kernel-version)
			KVER=$2
			shift
			;;
		--)
			[[ -z "$2" ]] && die "no module file specified"
			PATCH="$2"
			;;
		esac
		shift
	done

	[[ ! -e "$PATCH" ]] && die "$PATCH doesn't exist"
	[[ ${PATCH: -3} == ".ko" ]] || die "$PATCH isn't a .ko file"

	get_module_version "$PATCH" || die "modinfo failed"
	[[ $KVER != $MODVER ]] && die "invalid module version $MODVER for kernel $KVER"

	[[ -e $INSTALLDIR/$KVER/$(basename "$PATCH") ]] && die "$PATCH is already installed"

	echo "installing $PATCH ($KVER)"
	mkdir -p $INSTALLDIR/$KVER || die "failed to create install directory"
	cp -f "$PATCH" $INSTALLDIR/$KVER || die "failed to install module $PATCH"
	systemctl enable kpatch.service
	;;

"uninstall")
	KVER=$(uname -r)
	shift
	options=$(getopt -o k: -l "kernel-version:" -- "$@") || die "getopt failed"
	eval set -- "$options"

	while [[ $# -gt 0 ]]; do
		case "$1" in
		-k|--kernel-version)
			KVER=$2
			shift
			;;
		--)
			[[ -z "$2" ]] && die "no module file specified"
			PATCH="$2"
			[[ "$PATCH" != $(basename "$PATCH") ]] && die "please supply patch module name without path"
			;;
		esac
		shift
	done

	MODULE=$INSTALLDIR/$KVER/"$PATCH"
	if [[ ! -f "$MODULE" ]]; then
		mod_name "$PATCH"
		PATCHNAME=$MODNAME
		for i in $INSTALLDIR/$KVER/*; do
			mod_name "$i"
			if [[ $MODNAME == $PATCHNAME ]]; then
				MODULE="$i"
				break
			fi
		done
	fi

	[[ ! -e $MODULE ]] && die "$PATCH is not installed for kernel $KVER"

	echo "uninstalling $PATCH ($KVER)"
	rm -f $MODULE || die "failed to uninstall module $PATCH"
	;;

"list")
	[[ "$#" -ne 1 ]] && usage
	echo "Loaded patch modules:"
	for module in /sys/kernel/kpatch/patches/*; do
		if [[ -e $module ]] && [[ $(cat $module/enabled) -eq 1 ]]; then
			echo $(basename "$module")
		fi
	done
	echo ""
	echo "Installed patch modules:"
	for kdir in $INSTALLDIR/*; do
		[[ -e "$kdir" ]] || continue
		for module in $kdir/*; do
			[[ -e "$module" ]] || continue
			mod_name "$module"
			echo "$MODNAME ($(basename $kdir))"
		done
	done
	;;

"info")
	[[ "$#" -ne 2 ]] && usage
	PATCH="$2"
	find_module "$PATCH" || die "can't find $PATCH"
	echo "Patch information for $PATCH:"
	modinfo "$MODULE" || die "failed to get info for module $PATCH"
	;;

"help"|"-h"|"--help")
	usage
	;;

"version"|"-v"|"--version")
	echo "$VERSION"
	;;

*)
	echo "subcommand $1 not recognized"
	usage
	;;
esac

kpatch-0.3.2/man/Makefile
include ../Makefile.inc

all: kpatch.1.gz kpatch-build.1.gz

kpatch.1.gz: kpatch.1
	gzip -c -9 $< > $@

kpatch-build.1.gz: kpatch-build.1
	gzip -c -9 $< > $@

install: all
	$(INSTALL) -d $(MANDIR)
	$(INSTALL) -m 644 kpatch.1.gz $(MANDIR)
	$(INSTALL) -m 644 kpatch-build.1.gz $(MANDIR)

uninstall:
	$(RM) $(MANDIR)/kpatch.1*
	$(RM) $(MANDIR)/kpatch-build.1*

clean:
	$(RM) kpatch.1.gz
	$(RM) kpatch-build.1.gz

kpatch-0.3.2/man/kpatch-build.1
.\" Manpage for kpatch-build.
.\" Contact udoseidel@gmx.de to correct errors or typos.
.TH man 1 "23 Mar 2014" "1.0" "kpatch-build man page"
.SH NAME
kpatch-build \- build script
.SH SYNOPSIS
kpatch-build [options] <patch file>
.SH DESCRIPTION
This script takes a patch based on the version of the kernel currently running and creates a kernel module that will replace modified functions in the kernel such that the patched code takes effect. This script currently only works on Fedora and will need to be adapted to work on other distros.
.SH OPTIONS
-h|--help
Show this help message

-r|--sourcerpm
Specify kernel source RPM

-s|--sourcedir
Specify kernel source directory

-c|--config
Specify kernel config file

-v|--vmlinux
Specify original vmlinux

-t|--target
Specify custom kernel build targets

-d|--debug
Keep scratch files in /tmp

--skip-gcc-check
Skips check that ensures that the system gcc version and the gcc version that built the kernel match. Skipping this check is not recommended, but is useful if the exact gcc version is not available or is not easily installed. Use only when confident that the two versions of gcc output identical objects for a given target. Otherwise, use of this option might result in unexpected changed objects being detected.
.SH SEE ALSO
kpatch(1)
.SH BUGS
No known bugs.
.SH AUTHOR
Udo Seidel (udoseidel@gmx.de)
.SH COPYRIGHT
Copyright (C) 2014: Seth Jennings, Copyright (C) 2013,2014: Josh Poimboeuf

kpatch-0.3.2/man/kpatch.1
.\" Manpage for kpatch.
.\" Contact udoseidel@gmx.de to correct errors or typos.
.TH man 1 "23 Mar 2014" "1.0" "kpatch man page"
.SH NAME
kpatch \- hot patch module management
.SH SYNOPSIS
kpatch <command> [<args>]
.SH DESCRIPTION
kpatch is a user script that manages installing, loading, and displaying information about kernel patch modules installed on the system.
.SH COMMANDS
install [-k|--kernel-version=<kernel version>] <module>
install patch module to the initrd to be loaded at boot

uninstall [-k|--kernel-version=<kernel version>] <module>
uninstall patch module from the initrd

load --all
load all installed patch modules into the running kernel

load <module>
load patch module into the running kernel

unload --all
unload all patch modules from the running kernel

unload <module>
unload patch module from the running kernel

info <module>
show information about a patch module

list
list installed patch modules

version
display the kpatch version
.SH SEE ALSO
kpatch-build(1)
.SH BUGS
No known bugs.
.SH AUTHOR
Udo Seidel (udoseidel@gmx.de)
.SH COPYRIGHT
Copyright (C) 2014: Seth Jennings and Josh Poimboeuf

kpatch-0.3.2/test/difftree.sh
#!/bin/bash
# The purpose of this test script is to determine if create-diff-object can
# properly recognize object file equivalence when passed the same file for both
# the original and patched objects. This verifies that create-diff-object is
# correctly parsing, correlating, and comparing the different elements of the
# object file. In practice, a situation similar to the test case occurs when a
# commonly included header file changes, causing Make to rebuild many objects
# that have no functional change.
#
# This script requires a built kernel object tree to be in the kpatch cache
# directory at $HOME/.kpatch/obj

#set -x

OBJDIR="$HOME/.kpatch/obj"
SCRIPTDIR="$(readlink -f $(dirname $(type -p $0)))"
TEMPDIR=$(mktemp -d)
RESULTSDIR="$TEMPDIR/results"
VMLINUX="/usr/lib/debug/lib/modules/$(uname -r)/vmlinux" # path for F20

if [[ ! -d $OBJDIR ]]; then
	echo "please run kpatch-build to populate the object tree in $OBJDIR"
fi

cd "$OBJDIR" || exit 1

for i in $(find * -name '*.o')
do
	# copied from kpatch-build/kpatch-gcc; keep in sync
	case $i in
		*.mod.o|\
		*built-in.o|\
		vmlinux.o|\
		.tmp_kallsyms1.o|\
		.tmp_kallsyms2.o|\
		init/version.o|\
		arch/x86/boot/version.o|\
		arch/x86/boot/compressed/eboot.o|\
		arch/x86/boot/header.o|\
		arch/x86/boot/compressed/efi_stub_64.o|\
		arch/x86/boot/compressed/piggy.o|\
		kernel/system_certificates.o|\
		.*.o)
			continue
			;;
	esac
	# skip objects that are the linked product of more than one object file
	[[ $(eu-readelf -s $i | grep FILE | wc -l) -ne 1 ]] && continue

	$SCRIPTDIR/../kpatch-build/create-diff-object $i $i /usr/lib/debug/lib/modules/$(uname -r)/vmlinux "$TEMPDIR/output.o" > "$TEMPDIR/log.txt" 2>&1
	RETCODE=$?

	# expect RETCODE to be 3 indicating no change
	[[ $RETCODE -eq 3 ]] && continue

	# otherwise record error
	mkdir -p $RESULTSDIR/$(dirname $i) || exit 1
	cp "$i" "$RESULTSDIR/$i" || exit 1

	case $RETCODE in
	139)
		echo "$i: segfault" | tee
		if [[ ! -e core ]]; then
			echo "no corefile, run "ulimit -c unlimited" to capture corefile"
		else
			mv core "$RESULTSDIR/$i.core" || exit 1
		fi
		;;
	0)
		echo "$i: incorrectly detected change"
		mv "$TEMPDIR/log.txt" "$RESULTSDIR/$i.log" || exit 1
		;;
	1|2)
		echo "$i: error code $RETCODE"
		mv "$TEMPDIR/log.txt" "$RESULTSDIR/$i.log" || exit 1
		;;
	*)
		exit 1 # script error
		;;
	esac
done
rm -f "$TEMPDIR/log.txt" > /dev/null 2>&1

# try to group the errors together in some meaningful way
cd "$RESULTSDIR" || exit 1
echo ""
echo "Results:"
for i in $(find * -iname '*.log')
do
	echo $(cat $i | head -1 | cut -f2-3 -d':')
done | sort | uniq -c | sort -n -r | tee "$TEMPDIR/results.log"

echo "results are in $TEMPDIR"

kpatch-0.3.2/test/integration/.gitignore
test.log
COMBINED.patch

kpatch-0.3.2/test/integration/f22/Makefile
all:
	$(error please specify local or remote)

local: slow

remote: remote_slow

slow: clean
	../kpatch-test

quick: clean
	../kpatch-test --quick

cached:
	../kpatch-test --cached

clean:
	rm -f *.ko *.log COMBINED.patch

check_host:
ifndef SSH_HOST
	$(error SSH_HOST is undefined)
endif

SSH_USER ?= root

remote_setup: check_host
	ssh $(SSH_USER)@$(SSH_HOST) exit
	ssh $(SSH_USER)@$(SSH_HOST) "ls kpatch-setup &> /dev/null" || \
		(scp remote-setup $(SSH_USER)@$(SSH_HOST):kpatch-setup && \
		ssh $(SSH_USER)@$(SSH_HOST) "./kpatch-setup")

remote_sync: remote_setup
	ssh $(SSH_USER)@$(SSH_HOST) "rm -rf kpatch-test"
	rsync -Cavz --include=core $(shell readlink -f ../../..)
$(SSH_USER)@$(SSH_HOST):kpatch-test ssh $(SSH_USER)@$(SSH_HOST) "cd kpatch-test/kpatch && make" remote_slow: remote_sync ssh $(SSH_USER)@$(SSH_HOST) "cd kpatch-test/kpatch/test/integration/f22 && make slow" kpatch-0.3.2/test/integration/f22/README000066400000000000000000000000261266116401600175420ustar00rootroot000000000000004.2.3-200.fc22.x86_64 kpatch-0.3.2/test/integration/f22/bug-table-section.patch000066400000000000000000000007221266116401600232120ustar00rootroot00000000000000Index: src/fs/proc/proc_sysctl.c =================================================================== --- src.orig/fs/proc/proc_sysctl.c +++ src/fs/proc/proc_sysctl.c @@ -266,6 +266,8 @@ void sysctl_head_put(struct ctl_table_he static struct ctl_table_header *sysctl_head_grab(struct ctl_table_header *head) { + if (jiffies == 0) + printk("kpatch-test: testing __bug_table section changes\n"); BUG_ON(!head); spin_lock(&sysctl_lock); if (!use_table(head)) kpatch-0.3.2/test/integration/f22/cmdline-string-LOADED.test000077500000000000000000000000511266116401600234310ustar00rootroot00000000000000#!/bin/bash grep kpatch=1 /proc/cmdline kpatch-0.3.2/test/integration/f22/cmdline-string.patch000066400000000000000000000005361266116401600226300ustar00rootroot00000000000000Index: src/fs/proc/cmdline.c =================================================================== --- src.orig/fs/proc/cmdline.c +++ src/fs/proc/cmdline.c @@ -5,7 +5,7 @@ static int cmdline_proc_show(struct seq_file *m, void *v) { - seq_printf(m, "%s\n", saved_command_line); + seq_printf(m, "%s kpatch=1\n", saved_command_line); return 0; } kpatch-0.3.2/test/integration/f22/data-new-LOADED.test000077500000000000000000000000541266116401600222150ustar00rootroot00000000000000#!/bin/bash grep "kpatch: 5" /proc/meminfo kpatch-0.3.2/test/integration/f22/data-new.patch000066400000000000000000000013051266116401600214040ustar00rootroot00000000000000Index: src/fs/proc/meminfo.c =================================================================== 
--- src.orig/fs/proc/meminfo.c +++ src/fs/proc/meminfo.c @@ -20,6 +20,8 @@ void __attribute__((weak)) arch_report_m { } +static int foo = 5; + static int meminfo_proc_show(struct seq_file *m, void *v) { struct sysinfo i; @@ -138,6 +140,7 @@ static int meminfo_proc_show(struct seq_ #ifdef CONFIG_TRANSPARENT_HUGEPAGE "AnonHugePages: %8lu kB\n" #endif + "kpatch: %d" , K(i.totalram), K(i.freeram), @@ -193,6 +196,7 @@ static int meminfo_proc_show(struct seq_ ,K(global_page_state(NR_ANON_TRANSPARENT_HUGEPAGES) * HPAGE_PMD_NR) #endif + ,foo ); hugetlb_report_meminfo(m); kpatch-0.3.2/test/integration/f22/data-read-mostly.patch000066400000000000000000000004071266116401600230550ustar00rootroot00000000000000Index: src/net/core/dev.c =================================================================== --- src.orig/net/core/dev.c +++ src/net/core/dev.c @@ -3609,6 +3609,7 @@ ncls: case RX_HANDLER_PASS: break; default: + printk("BUG!\n"); BUG(); } } kpatch-0.3.2/test/integration/f22/fixup-section.patch000066400000000000000000000006751266116401600225120ustar00rootroot00000000000000Index: src/fs/readdir.c =================================================================== --- src.orig/fs/readdir.c +++ src/fs/readdir.c @@ -169,6 +169,8 @@ static int filldir(void * __buf, const c goto efault; } dirent = buf->current_dir; + if (dirent->d_ino == 12345678) + printk("kpatch-test: testing .fixup section changes\n"); if (__put_user(d_ino, &dirent->d_ino)) goto efault; if (__put_user(reclen, &dirent->d_reclen)) kpatch-0.3.2/test/integration/f22/gcc-constprop.patch000066400000000000000000000007371266116401600224750ustar00rootroot00000000000000ensure timekeeping_forward_now.constprop.8 and timekeeping_forward_now.constprop.9 are correlated. 
Index: src/kernel/time/timekeeping.c =================================================================== --- src.orig/kernel/time/timekeeping.c +++ src/kernel/time/timekeeping.c @@ -781,6 +781,9 @@ void do_gettimeofday(struct timeval *tv) { struct timespec64 now; + if (!tv) + return; + getnstimeofday64(&now); tv->tv_sec = now.tv_sec; tv->tv_usec = now.tv_nsec/1000; kpatch-0.3.2/test/integration/f22/gcc-isra.patch000066400000000000000000000006051266116401600213760ustar00rootroot00000000000000Index: src/fs/proc/proc_sysctl.c =================================================================== --- src.orig/fs/proc/proc_sysctl.c +++ src/fs/proc/proc_sysctl.c @@ -24,6 +24,7 @@ void proc_sys_poll_notify(struct ctl_tab if (!poll) return; + printk("kpatch-test: testing gcc .isra function name mangling\n"); atomic_inc(&poll->event); wake_up_interruptible(&poll->wait); } kpatch-0.3.2/test/integration/f22/gcc-mangled-3.patch000066400000000000000000000007461266116401600222150ustar00rootroot00000000000000ensure that __cmpxchg_double_slab.isra.45 and __cmpxchg_double_slab.isra.45.part.46 aren't correlated. 
Index: src/mm/slub.c =================================================================== --- src.orig/mm/slub.c +++ src/mm/slub.c @@ -5320,6 +5320,9 @@ void get_slabinfo(struct kmem_cache *s, int node; struct kmem_cache_node *n; + if (!jiffies) + printk("slabinfo\n"); + for_each_kmem_cache_node(s, node, n) { nr_slabs += node_nr_slabs(n); nr_objs += node_nr_objs(n); kpatch-0.3.2/test/integration/f22/gcc-static-local-var-2.patch000066400000000000000000000013351266116401600237450ustar00rootroot00000000000000Index: src/mm/mmap.c =================================================================== --- src.orig/mm/mmap.c +++ src/mm/mmap.c @@ -1493,6 +1493,7 @@ static inline int accountable_mapping(st return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE; } +#include "kpatch-macros.h" unsigned long mmap_region(struct file *file, unsigned long addr, unsigned long len, vm_flags_t vm_flags, unsigned long pgoff) { @@ -1502,6 +1503,9 @@ unsigned long mmap_region(struct file *f struct rb_node **rb_link, *rb_parent; unsigned long charged = 0; + if (!jiffies) + printk("kpatch mmap foo\n"); + /* Check against address space limit. 
*/ if (!may_expand_vm(mm, len >> PAGE_SHIFT)) { unsigned long nr_pages; kpatch-0.3.2/test/integration/f22/gcc-static-local-var-3.patch000066400000000000000000000006241266116401600237460ustar00rootroot00000000000000Index: src/kernel/reboot.c =================================================================== --- src.orig/kernel/reboot.c +++ src/kernel/reboot.c @@ -285,8 +285,15 @@ SYSCALL_DEFINE4(reboot, int, magic1, int return ret; } +void kpatch_bar(void) +{ + if (!jiffies) + printk("kpatch_foo\n"); +} + static void deferred_cad(struct work_struct *dummy) { + kpatch_bar(); kernel_restart(NULL); } kpatch-0.3.2/test/integration/f22/gcc-static-local-var-4.patch000066400000000000000000000007531266116401600237520ustar00rootroot00000000000000Index: src/fs/aio.c =================================================================== --- src.orig/fs/aio.c +++ src/fs/aio.c @@ -229,9 +229,16 @@ static int __init aio_setup(void) } __initcall(aio_setup); +void kpatch_aio_foo(void) +{ + if (!jiffies) + printk("kpatch aio foo\n"); +} + static void put_aio_ring_file(struct kioctx *ctx) { struct file *aio_ring_file = ctx->aio_ring_file; + kpatch_aio_foo(); if (aio_ring_file) { truncate_setsize(aio_ring_file->f_inode, 0); kpatch-0.3.2/test/integration/f22/gcc-static-local-var-4.test000077500000000000000000000001521266116401600236260ustar00rootroot00000000000000#!/bin/bash if $(nm kpatch-gcc-static-local-var-4.ko | grep -q free_ioctx); then exit 1 else exit 0 fi kpatch-0.3.2/test/integration/f22/gcc-static-local-var-5.patch000066400000000000000000000021171266116401600237470ustar00rootroot00000000000000Index: src/kernel/audit.c =================================================================== --- src.orig/kernel/audit.c +++ src/kernel/audit.c @@ -211,6 +211,12 @@ void audit_panic(const char *message) } } +void kpatch_audit_foo(void) +{ + if (!jiffies) + printk("kpatch audit foo\n"); +} + static inline int audit_rate_check(void) { static unsigned long last_check = 0; @@ -221,6 +227,7 
@@ static inline int audit_rate_check(void) unsigned long elapsed; int retval = 0; + kpatch_audit_foo(); if (!audit_rate_limit) return 1; spin_lock_irqsave(&lock, flags); @@ -240,6 +247,11 @@ static inline int audit_rate_check(void) return retval; } +noinline void kpatch_audit_check(void) +{ + audit_rate_check(); +} + /** * audit_log_lost - conditionally log lost audit message event * @message: the message stating reason for lost audit message @@ -286,6 +298,8 @@ static int audit_log_config_change(char struct audit_buffer *ab; int rc = 0; + kpatch_audit_check(); + ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_CONFIG_CHANGE); if (unlikely(!ab)) return rc; kpatch-0.3.2/test/integration/f22/gcc-static-local-var.patch000066400000000000000000000011501266116401600236010ustar00rootroot00000000000000Index: src/arch/x86/kernel/ldt.c =================================================================== --- src.orig/arch/x86/kernel/ldt.c +++ src/arch/x86/kernel/ldt.c @@ -100,6 +100,12 @@ static inline int copy_ldt(mm_context_t return 0; } +void hi_there(void) +{ + if (!jiffies) + printk("hi there\n"); +} + /* * we do not have to muck with descriptors here, that is * done in switch_mm() as needed. 
@@ -109,6 +115,8 @@ int init_new_context(struct task_struct struct mm_struct *old_mm; int retval = 0; + hi_there(); + mutex_init(&mm->context.lock); mm->context.size = 0; old_mm = current->mm; kpatch-0.3.2/test/integration/f22/macro-hooks-LOADED.test000077500000000000000000000000731266116401600227400ustar00rootroot00000000000000#!/bin/bash [[ $(cat /proc/sys/fs/aio-max-nr) = 262144 ]] kpatch-0.3.2/test/integration/f22/macro-hooks.patch000066400000000000000000000012571266116401600221340ustar00rootroot00000000000000Index: src/fs/aio.c =================================================================== --- src.orig/fs/aio.c +++ src/fs/aio.c @@ -1626,6 +1626,20 @@ SYSCALL_DEFINE3(io_cancel, aio_context_t return ret; } +static int aio_max_nr_orig; +void kpatch_load_aio_max_nr(void) +{ + aio_max_nr_orig = aio_max_nr; + aio_max_nr = 0x40000; +} +void kpatch_unload_aio_max_nr(void) +{ + aio_max_nr = aio_max_nr_orig; +} +#include "kpatch-macros.h" +KPATCH_LOAD_HOOK(kpatch_load_aio_max_nr); +KPATCH_UNLOAD_HOOK(kpatch_unload_aio_max_nr); + /* io_getevents: * Attempts to read at least min_nr events and up to nr events from * the completion queue for the aio_context specified by ctx_id. 
If kpatch-0.3.2/test/integration/f22/macro-printk.patch000066400000000000000000000111311266116401600223100ustar00rootroot00000000000000Index: src/net/ipv4/fib_frontend.c =================================================================== --- src.orig/net/ipv4/fib_frontend.c +++ src/net/ipv4/fib_frontend.c @@ -686,6 +686,7 @@ errout: return err; } +#include "kpatch-macros.h" static int inet_rtm_newroute(struct sk_buff *skb, struct nlmsghdr *nlh) { struct net *net = sock_net(skb->sk); @@ -704,6 +705,7 @@ static int inet_rtm_newroute(struct sk_b } err = fib_table_insert(tb, &cfg); + KPATCH_PRINTK("[inet_rtm_newroute]: err is %d\n", err); errout: return err; } Index: src/net/ipv4/fib_semantics.c =================================================================== --- src.orig/net/ipv4/fib_semantics.c +++ src/net/ipv4/fib_semantics.c @@ -760,6 +760,7 @@ __be32 fib_info_update_nh_saddr(struct n return nh->nh_saddr; } +#include "kpatch-macros.h" struct fib_info *fib_create_info(struct fib_config *cfg) { int err; @@ -784,6 +785,7 @@ struct fib_info *fib_create_info(struct #endif err = -ENOBUFS; + KPATCH_PRINTK("[fib_create_info]: create error err is %d\n",err); if (fib_info_cnt >= fib_info_hash_size) { unsigned int new_size = fib_info_hash_size << 1; struct hlist_head *new_info_hash; @@ -804,6 +806,7 @@ struct fib_info *fib_create_info(struct if (!fib_info_hash_size) goto failure; } + KPATCH_PRINTK("[fib_create_info]: 2 create error err is %d\n",err); fi = kzalloc(sizeof(*fi)+nhs*sizeof(struct fib_nh), GFP_KERNEL); if (!fi) @@ -815,6 +818,7 @@ struct fib_info *fib_create_info(struct goto failure; } else fi->fib_metrics = (u32 *) dst_default_metrics; + KPATCH_PRINTK("[fib_create_info]: 3 create error err is %d\n",err); fi->fib_net = net; fi->fib_protocol = cfg->fc_protocol; @@ -831,6 +835,7 @@ struct fib_info *fib_create_info(struct if (!nexthop_nh->nh_pcpu_rth_output) goto failure; } endfor_nexthops(fi) + KPATCH_PRINTK("[fib_create_info]: 4 create error err is %d\n",err); 
if (cfg->fc_mx) { struct nlattr *nla; @@ -862,6 +867,7 @@ struct fib_info *fib_create_info(struct } } } + KPATCH_PRINTK("[fib_create_info]: 5 create error err is %d\n",err); if (cfg->fc_mp) { #ifdef CONFIG_IP_ROUTE_MULTIPATH @@ -894,6 +900,7 @@ struct fib_info *fib_create_info(struct nh->nh_weight = 1; #endif } + KPATCH_PRINTK("[fib_create_info]: 6 create error err is %d\n",err); if (fib_props[cfg->fc_type].error) { if (cfg->fc_gw || cfg->fc_oif || cfg->fc_mp) @@ -911,6 +918,7 @@ struct fib_info *fib_create_info(struct goto err_inval; } } + KPATCH_PRINTK("[fib_create_info]: 7 create error err is %d\n",err); if (cfg->fc_scope > RT_SCOPE_HOST) goto err_inval; @@ -939,6 +947,7 @@ struct fib_info *fib_create_info(struct if (linkdown == fi->fib_nhs) fi->fib_flags |= RTNH_F_LINKDOWN; } + KPATCH_PRINTK("[fib_create_info]: 8 create error err is %d\n",err); if (fi->fib_prefsrc) { if (cfg->fc_type != RTN_LOCAL || !cfg->fc_dst || @@ -950,6 +959,7 @@ struct fib_info *fib_create_info(struct change_nexthops(fi) { fib_info_update_nh_saddr(net, nexthop_nh); } endfor_nexthops(fi) + KPATCH_PRINTK("[fib_create_info]: 9 create error err is %d\n",err); link_it: ofi = fib_find_info(fi); @@ -959,6 +969,7 @@ link_it: ofi->fib_treeref++; return ofi; } + KPATCH_PRINTK("[fib_create_info]: 10 create error err is %d\n",err); fi->fib_treeref++; atomic_inc(&fi->fib_clntref); @@ -982,6 +993,7 @@ link_it: hlist_add_head(&nexthop_nh->nh_hash, head); } endfor_nexthops(fi) spin_unlock_bh(&fib_info_lock); + KPATCH_PRINTK("[fib_create_info]: 11 create error err is %d\n",err); return fi; err_inval: @@ -992,6 +1004,7 @@ failure: fi->fib_dead = 1; free_fib_info(fi); } + KPATCH_PRINTK("[fib_create_info]: 12 create error err is %d\n",err); return ERR_PTR(err); } Index: src/net/ipv4/fib_trie.c =================================================================== --- src.orig/net/ipv4/fib_trie.c +++ src/net/ipv4/fib_trie.c @@ -1077,6 +1077,7 @@ static int fib_insert_alias(struct trie } /* Caller must hold RTNL. 
*/ +#include "kpatch-macros.h" int fib_table_insert(struct fib_table *tb, struct fib_config *cfg) { struct trie *t = (struct trie *)tb->tb_data; @@ -1100,11 +1101,14 @@ int fib_table_insert(struct fib_table *t if ((plen < KEYLENGTH) && (key << plen)) return -EINVAL; + KPATCH_PRINTK("[fib_table_insert]: start\n"); fi = fib_create_info(cfg); if (IS_ERR(fi)) { err = PTR_ERR(fi); + KPATCH_PRINTK("[fib_table_insert]: create error err is %d\n",err); goto err; } + KPATCH_PRINTK("[fib_table_insert]: cross\n"); l = fib_find_node(t, &tp, key); fa = l ? fib_find_alias(&l->leaf, slen, tos, fi->fib_priority, kpatch-0.3.2/test/integration/f22/meminfo-cmdline-rebuild-SLOW-LOADED.test000077500000000000000000000001141266116401600257630ustar00rootroot00000000000000#!/bin/bash grep VMALLOCCHUNK /proc/meminfo && grep kpatch=1 /proc/cmdline kpatch-0.3.2/test/integration/f22/meminfo-cmdline-rebuild-SLOW.patch000066400000000000000000000020741266116401600251610ustar00rootroot00000000000000Index: src/fs/proc/cmdline.c =================================================================== --- src.orig/fs/proc/cmdline.c +++ src/fs/proc/cmdline.c @@ -5,7 +5,7 @@ static int cmdline_proc_show(struct seq_file *m, void *v) { - seq_printf(m, "%s\n", saved_command_line); + seq_printf(m, "%s kpatch=1\n", saved_command_line); return 0; } Index: src/fs/proc/meminfo.c =================================================================== --- src.orig/fs/proc/meminfo.c +++ src/fs/proc/meminfo.c @@ -131,7 +131,7 @@ static int meminfo_proc_show(struct seq_ "Committed_AS: %8lu kB\n" "VmallocTotal: %8lu kB\n" "VmallocUsed: %8lu kB\n" - "VmallocChunk: %8lu kB\n" + "VMALLOCCHUNK: %8lu kB\n" #ifdef CONFIG_MEMORY_FAILURE "HardwareCorrupted: %5lu kB\n" #endif Index: src/include/linux/kernel.h =================================================================== --- src.orig/include/linux/kernel.h +++ src/include/linux/kernel.h @@ -2,6 +2,7 @@ #define _LINUX_KERNEL_H + #include #include #include 
kpatch-0.3.2/test/integration/f22/meminfo-init-FAIL.patch000066400000000000000000000005361266116401600230150ustar00rootroot00000000000000Index: src/fs/proc/meminfo.c =================================================================== --- src.orig/fs/proc/meminfo.c +++ src/fs/proc/meminfo.c @@ -180,6 +180,7 @@ static const struct file_operations memi static int __init proc_meminfo_init(void) { + printk("a\n"); proc_create("meminfo", 0, NULL, &meminfo_proc_fops); return 0; } kpatch-0.3.2/test/integration/f22/meminfo-init2-FAIL.patch000066400000000000000000000007771266116401600231060ustar00rootroot00000000000000Index: src/fs/proc/meminfo.c =================================================================== --- src.orig/fs/proc/meminfo.c +++ src/fs/proc/meminfo.c @@ -29,6 +29,7 @@ static int meminfo_proc_show(struct seq_ unsigned long pages[NR_LRU_LISTS]; int lru; + printk("a\n"); /* * display in kilobytes. */ @@ -180,6 +181,7 @@ static const struct file_operations memi static int __init proc_meminfo_init(void) { + printk("a\n"); proc_create("meminfo", 0, NULL, &meminfo_proc_fops); return 0; } kpatch-0.3.2/test/integration/f22/meminfo-string-LOADED.test000077500000000000000000000000551266116401600234540ustar00rootroot00000000000000#!/bin/bash grep VMALLOCCHUNK /proc/meminfo kpatch-0.3.2/test/integration/f22/meminfo-string.patch000066400000000000000000000006701266116401600226460ustar00rootroot00000000000000Index: src/fs/proc/meminfo.c =================================================================== --- src.orig/fs/proc/meminfo.c +++ src/fs/proc/meminfo.c @@ -95,7 +95,7 @@ static int meminfo_proc_show(struct seq_ "Committed_AS: %8lu kB\n" "VmallocTotal: %8lu kB\n" "VmallocUsed: %8lu kB\n" - "VmallocChunk: %8lu kB\n" + "VMALLOCCHUNK: %8lu kB\n" #ifdef CONFIG_MEMORY_FAILURE "HardwareCorrupted: %5lu kB\n" #endif kpatch-0.3.2/test/integration/f22/module-call-external.patch000066400000000000000000000016341266116401600237270ustar00rootroot00000000000000Index: 
src/fs/nfsd/export.c =================================================================== --- src.orig/fs/nfsd/export.c +++ src/fs/nfsd/export.c @@ -1241,6 +1241,8 @@ static void exp_flags(struct seq_file *m } } +extern char *kpatch_string(void); + static int e_show(struct seq_file *m, void *p) { struct cache_head *cp = p; @@ -1250,6 +1252,7 @@ static int e_show(struct seq_file *m, vo if (p == SEQ_START_TOKEN) { seq_puts(m, "# Version 1.1\n"); seq_puts(m, "# Path Client(Flags) # IPs\n"); + seq_puts(m, kpatch_string()); return 0; } Index: src/net/netlink/af_netlink.c =================================================================== --- src.orig/net/netlink/af_netlink.c +++ src/net/netlink/af_netlink.c @@ -3228,4 +3228,9 @@ panic: panic("netlink_init: Cannot allocate nl_table\n"); } +char *kpatch_string(void) +{ + return "# kpatch\n"; +} + core_initcall(netlink_proto_init); kpatch-0.3.2/test/integration/f22/module-kvm-fixup.patch000066400000000000000000000006231266116401600231170ustar00rootroot00000000000000Index: src/arch/x86/kvm/vmx.c =================================================================== --- src.orig/arch/x86/kvm/vmx.c +++ src/arch/x86/kvm/vmx.c @@ -8774,6 +8774,8 @@ static int vmx_check_intercept(struct kv struct x86_instruction_info *info, enum x86_intercept_stage stage) { + if (!jiffies) + printk("kpatch vmx_check_intercept\n"); return X86EMUL_CONTINUE; } kpatch-0.3.2/test/integration/f22/module-shadow.patch000066400000000000000000000015701266116401600224600ustar00rootroot00000000000000Index: src/arch/x86/kvm/vmx.c =================================================================== --- src.orig/arch/x86/kvm/vmx.c +++ src/arch/x86/kvm/vmx.c @@ -8758,10 +8758,20 @@ static void vmx_leave_nested(struct kvm_ * It should only be called before L2 actually succeeded to run, and when * vmcs01 is current (it doesn't leave_guest_mode() or switch vmcss). 
*/ +#include "kpatch.h" static void nested_vmx_entry_failure(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12, u32 reason, unsigned long qualification) { + int *kpatch; + + kpatch = kpatch_shadow_alloc(vcpu, "kpatch", sizeof(*kpatch), + GFP_KERNEL); + if (kpatch) { + kpatch_shadow_get(vcpu, "kpatch"); + kpatch_shadow_free(vcpu, "kpatch"); + } + load_vmcs12_host_state(vcpu, vmcs12); vmcs12->vm_exit_reason = reason | VMX_EXIT_REASONS_FAILED_VMENTRY; vmcs12->exit_qualification = qualification; kpatch-0.3.2/test/integration/f22/multiple.test000077500000000000000000000016171266116401600214300ustar00rootroot00000000000000#!/bin/bash SCRIPTDIR="$(readlink -f $(dirname $(type -p $0)))" ROOTDIR="$(readlink -f $SCRIPTDIR/../../..)" KPATCH="sudo $ROOTDIR/kpatch/kpatch" set -o errexit die() { echo "ERROR: $@" >&2 exit 1 } ko_to_test() { tmp=${1%.ko}-LOADED.test echo ${tmp#kpatch-} } # make sure any modules added here are disjoint declare -a modules=(kpatch-cmdline-string.ko kpatch-meminfo-string.ko) for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) ./$testprog && die "./$testprog succeeded before loading any modules" done for mod in "${modules[@]}"; do $KPATCH load $mod done for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) ./$testprog || die "./$testprog failed after loading modules" done for mod in "${modules[@]}"; do $KPATCH unload $mod done for mod in "${modules[@]}"; do testprog=$(ko_to_test $mod) ./$testprog && die "./$testprog succeeded after unloading modules" done exit 0 kpatch-0.3.2/test/integration/f22/new-function.patch000066400000000000000000000014541266116401600223250ustar00rootroot00000000000000Index: src/drivers/tty/n_tty.c =================================================================== --- src.orig/drivers/tty/n_tty.c +++ src/drivers/tty/n_tty.c @@ -2304,7 +2304,7 @@ static ssize_t n_tty_read(struct tty_str * lock themselves) */ -static ssize_t n_tty_write(struct tty_struct *tty, struct file *file, +static ssize_t noinline 
kpatch_n_tty_write(struct tty_struct *tty, struct file *file, const unsigned char *buf, size_t nr) { const unsigned char *b = buf; @@ -2393,6 +2393,12 @@ break_out: return (b - buf) ? b - buf : retval; } +static ssize_t n_tty_write(struct tty_struct *tty, struct file *file, + const unsigned char *buf, size_t nr) +{ + return kpatch_n_tty_write(tty, file, buf, nr); +} + /** * n_tty_poll - poll method for N_TTY * @tty: terminal device kpatch-0.3.2/test/integration/f22/new-globals.patch000066400000000000000000000016471266116401600221270ustar00rootroot00000000000000Index: src/fs/proc/cmdline.c =================================================================== --- src.orig/fs/proc/cmdline.c +++ src/fs/proc/cmdline.c @@ -27,3 +27,10 @@ static int __init proc_cmdline_init(void return 0; } fs_initcall(proc_cmdline_init); + +#include <linux/printk.h> +void kpatch_print_message(void) +{ + if (!jiffies) + printk("hello there!\n"); +} Index: src/fs/proc/meminfo.c =================================================================== --- src.orig/fs/proc/meminfo.c +++ src/fs/proc/meminfo.c @@ -16,6 +16,8 @@ #include #include #include "internal.h" +void kpatch_print_message(void); + void __attribute__((weak)) arch_report_meminfo(struct seq_file *m) { } @@ -85,6 +87,7 @@ static int meminfo_proc_show(struct seq_ /* * Tagged format, for easy grepping and expansion. 
*/ + kpatch_print_message(); seq_printf(m, "MemTotal: %8lu kB\n" "MemFree: %8lu kB\n" kpatch-0.3.2/test/integration/f22/parainstructions-section.patch000066400000000000000000000006121266116401600247560ustar00rootroot00000000000000Index: src/fs/proc/generic.c =================================================================== --- src.orig/fs/proc/generic.c +++ src/fs/proc/generic.c @@ -132,6 +132,7 @@ int proc_alloc_inum(unsigned int *inum) unsigned int i; int error; + printk("kpatch-test: testing change to .parainstructions section\n"); retry: if (!ida_pre_get(&proc_inum_ida, GFP_KERNEL)) return -ENOMEM; kpatch-0.3.2/test/integration/f22/remote-setup000077500000000000000000000033431266116401600212460ustar00rootroot00000000000000#!/bin/bash -x # install rpms on a Fedora 22 system to prepare it for kpatch integration tests set -o errexit [[ $UID != 0 ]] && sudo=sudo warn() { echo "ERROR: $1" >&2 } die() { warn "$@" exit 1 } install_rpms() { # crude workaround for a weird dnf bug where it fails to download $sudo dnf install -y $* || $sudo dnf install -y $* } install_rpms gcc elfutils elfutils-devel rpmdevtools pesign openssl numactl-devel wget patchutils $sudo dnf builddep -y kernel || $sudo dnf builddep -y kernel # install kernel debuginfo and devel RPMs for target kernel kverrel=$(uname -r) kverrel=${kverrel%.x86_64} kver=${kverrel%%-*} krel=${kverrel#*-} install_rpms https://kojipkgs.fedoraproject.org/packages/kernel/$kver/$krel/x86_64/kernel-debuginfo-$kver-$krel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/kernel/$kver/$krel/x86_64/kernel-debuginfo-common-x86_64-$kver-$krel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/kernel/$kver/$krel/x86_64/kernel-devel-$kver-$krel.x86_64.rpm # install version of gcc which was used to build the target kernel gccver=$(gcc --version |head -n1 |cut -d' ' -f3-) kgccver=$(readelf -p .comment /usr/lib/debug/lib/modules/$(uname -r)/vmlinux |grep GCC: | tr -s ' ' | cut -d ' ' -f6-) if [[ $gccver != $kgccver ]]; 
then gver=$(echo $kgccver | awk '{print $1}') grel=$(echo $kgccver | sed 's/.*-\(.*\))/\1/') grel=$grel.$(rpm -q gcc |sed 's/.*\.\(.*\)\.x86_64/\1/') install_rpms https://kojipkgs.fedoraproject.org/packages/gcc/$gver/$grel/x86_64/cpp-$gver-$grel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/gcc/$gver/$grel/x86_64/gcc-$gver-$grel.x86_64.rpm https://kojipkgs.fedoraproject.org/packages/gcc/$gver/$grel/x86_64/libgomp-$gver-$grel.x86_64.rpm fi install_rpms ccache ccache -M 5G kpatch-0.3.2/test/integration/f22/replace-section-references.patch000066400000000000000000000007001266116401600250760ustar00rootroot00000000000000Index: src/arch/x86/kvm/x86.c =================================================================== --- src.orig/arch/x86/kvm/x86.c +++ src/arch/x86/kvm/x86.c @@ -218,6 +218,8 @@ static void shared_msr_update(unsigned s void kvm_define_shared_msr(unsigned slot, u32 msr) { + if (!jiffies) + printk("kpatch kvm define shared msr\n"); BUG_ON(slot >= KVM_NR_SHARED_MSRS); if (slot >= shared_msrs_global.nr) shared_msrs_global.nr = slot + 1; kpatch-0.3.2/test/integration/f22/shadow-newpid-LOADED.test000077500000000000000000000000551266116401600232670ustar00rootroot00000000000000#!/bin/bash grep -q newpid: /proc/$$/status kpatch-0.3.2/test/integration/f22/shadow-newpid.patch000066400000000000000000000040011266116401600224510ustar00rootroot00000000000000Index: src/kernel/fork.c =================================================================== --- src.orig/kernel/fork.c +++ src/kernel/fork.c @@ -1676,6 +1676,7 @@ struct task_struct *fork_idle(int cpu) * It copies the process, and if successful kick-starts * it and waits for it to finish using the VM if required. 
*/ +#include "kpatch.h" long _do_fork(unsigned long clone_flags, unsigned long stack_start, unsigned long stack_size, @@ -1714,6 +1715,13 @@ long _do_fork(unsigned long clone_flags, if (!IS_ERR(p)) { struct completion vfork; struct pid *pid; + int *newpid; + static int ctr = 0; + + newpid = kpatch_shadow_alloc(p, "newpid", sizeof(*newpid), + GFP_KERNEL); + if (newpid) + *newpid = ctr++; trace_sched_process_fork(current, p); Index: src/fs/proc/array.c =================================================================== --- src.orig/fs/proc/array.c +++ src/fs/proc/array.c @@ -331,13 +331,20 @@ static inline void task_seccomp(struct s #endif } +#include "kpatch.h" static inline void task_context_switch_counts(struct seq_file *m, struct task_struct *p) { + int *newpid; + seq_printf(m, "voluntary_ctxt_switches:\t%lu\n" "nonvoluntary_ctxt_switches:\t%lu\n", p->nvcsw, p->nivcsw); + + newpid = kpatch_shadow_get(p, "newpid"); + if (newpid) + seq_printf(m, "newpid:\t%d\n", *newpid); } static void task_cpus_allowed(struct seq_file *m, struct task_struct *task) Index: src/kernel/exit.c =================================================================== --- src.orig/kernel/exit.c +++ src/kernel/exit.c @@ -650,6 +650,7 @@ static void check_stack_usage(void) static inline void check_stack_usage(void) {} #endif +#include "kpatch.h" void do_exit(long code) { struct task_struct *tsk = current; @@ -746,6 +747,8 @@ void do_exit(long code) exit_task_work(tsk); exit_thread(); + kpatch_shadow_free(tsk, "newpid"); + /* * Flush inherited counters to the parent - before the parent * gets woken up by child-exit notifications. 
kpatch-0.3.2/test/integration/f22/smp-locks-section.patch000066400000000000000000000006731266116401600232650ustar00rootroot00000000000000Index: src/drivers/tty/tty_buffer.c =================================================================== --- src.orig/drivers/tty/tty_buffer.c +++ src/drivers/tty/tty_buffer.c @@ -247,6 +247,8 @@ static int __tty_buffer_request_room(str struct tty_buffer *b, *n; int left, change; + if (!size) + printk("kpatch-test: testing .smp_locks section changes\n"); b = buf->tail; if (b->flags & TTYB_NORMAL) left = 2 * b->size - b->used; kpatch-0.3.2/test/integration/f22/special-static-2.patch000066400000000000000000000011611266116401600227500ustar00rootroot00000000000000Index: src/arch/x86/kvm/x86.c =================================================================== --- src.orig/arch/x86/kvm/x86.c +++ src/arch/x86/kvm/x86.c @@ -2011,12 +2011,20 @@ static void record_steal_time(struct kvm &vcpu->arch.st.steal, sizeof(struct kvm_steal_time)); } +void kpatch_kvm_x86_foo(void) +{ + if (!jiffies) + printk("kpatch kvm x86 foo\n"); +} + int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) { bool pr = false; u32 msr = msr_info->index; u64 data = msr_info->data; + kpatch_kvm_x86_foo(); + switch (msr) { case MSR_AMD64_NB_CFG: case MSR_IA32_UCODE_REV: kpatch-0.3.2/test/integration/f22/special-static.patch000066400000000000000000000007761266116401600226240ustar00rootroot00000000000000Index: src/kernel/fork.c =================================================================== --- src.orig/kernel/fork.c +++ src/kernel/fork.c @@ -1029,10 +1029,18 @@ static void posix_cpu_timers_init_group( INIT_LIST_HEAD(&sig->cpu_timers[2]); } +void kpatch_foo(void) +{ + if (!jiffies) + printk("kpatch copy signal\n"); +} + static int copy_signal(unsigned long clone_flags, struct task_struct *tsk) { struct signal_struct *sig; + kpatch_foo(); + if (clone_flags & CLONE_THREAD) return 0; 
kpatch-0.3.2/test/integration/f22/tracepoints-section.patch000066400000000000000000000010111266116401600236730ustar00rootroot00000000000000ensure __jump_table is parsed and we can tell that it effectively didn't change Index: src/kernel/time/timer.c =================================================================== --- src.orig/kernel/time/timer.c +++ src/kernel/time/timer.c @@ -1410,6 +1410,9 @@ static void run_timer_softirq(struct sof { struct tvec_base *base = this_cpu_ptr(&tvec_bases); + if (!base) + printk("kpatch-test: testing __tracepoints section changes\n"); + if (time_after_eq(jiffies, base->timer_jiffies)) __run_timers(base); } kpatch-0.3.2/test/integration/f22/warn-detect-FAIL.patch000066400000000000000000000003511266116401600226320ustar00rootroot00000000000000Index: src/arch/x86/kvm/x86.c =================================================================== --- src.orig/arch/x86/kvm/x86.c +++ src/arch/x86/kvm/x86.c @@ -1,3 +1,4 @@ + /* * Kernel-based Virtual Machine driver for Linux * kpatch-0.3.2/test/integration/kpatch-test000077500000000000000000000154311266116401600204540ustar00rootroot00000000000000#!/bin/bash # # kpatch integration test framework # # Copyright (C) 2014 Josh Poimboeuf # # This program is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License # as published by the Free Software Foundation; either version 2 # of the License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, write to the Free Software # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA, # 02110-1301, USA. 
# # # This is a basic integration test framework for kpatch, which tests building, # loading, and unloading patches, as well as any other related custom tests. # # This script looks for test input files in the current directory. It expects # certain file naming conventions: # # - foo.patch: patch that should build successfully # # - foo-SLOW.patch: patch that should be skipped in the quick test # # - bar-FAIL.patch: patch that should fail to build # # - foo-LOADED.test: executable which tests whether the foo.patch module is # loaded. It will be used to test that loading/unloading the patch module # works as expected. # # Any other *.test files will be executed after all the patch modules have been # built from the *.patch files. They can be used for more custom tests above # and beyond the simple loading and unloading tests. SCRIPTDIR="$(readlink -f $(dirname $(type -p $0)))" ROOTDIR="$(readlink -f $SCRIPTDIR/../..)" # TODO: option to use system-installed binaries instead KPATCH="sudo $ROOTDIR/kpatch/kpatch" RMMOD="sudo rmmod" unset CCACHE_HASHDIR KPATCHBUILD="$ROOTDIR"/kpatch-build/kpatch-build ERROR=0 LOG=test.log rm -f $LOG usage() { echo "usage: $0 [options]" >&2 echo " -h, --help Show this help message" >&2 echo " -c, --cached Don't rebuild patch modules" >&2 echo " -q, --quick Just combine all patches into one module for testing" >&2 } options=$(getopt -o hcq -l "help,cached,quick" -- "$@") || exit 1 eval set -- "$options" while [[ $# -gt 0 ]]; do case "$1" in -h|--help) usage exit 0 ;; -c|--cached) SKIPBUILD=1 ;; -q|--quick) QUICK=1 ;; esac shift done error() { echo "ERROR: $@" |tee -a $LOG >&2 ERROR=$((ERROR + 1)) } log() { echo "$@" |tee -a $LOG } unload_all() { for i in `lsmod |egrep '^kpatch' |awk '{print $1}'`; do if [[ $i != kpatch ]]; then $KPATCH unload $i >> $LOG 2>&1 || error "\"kpatch unload $i\" failed" fi done if lsmod |egrep -q '^kpatch'; then $RMMOD kpatch >> $LOG 2>&1 || error "\"rmmod kpatch\" failed" fi } build_module() { file=$1 
prefix=${file%%.patch} module=kpatch-$prefix.ko [[ $prefix = COMBINED ]] && return if [[ $prefix =~ -FAIL ]]; then shouldfail=1 else shouldfail=0 fi if [[ $SKIPBUILD -eq 1 ]]; then skip=0 [[ $shouldfail -eq 1 ]] && skip=1 [[ -e $module ]] && skip=1 [[ $skip -eq 1 ]] && log "skipping build: $prefix" && return fi log "build: $prefix" if ! $KPATCHBUILD $file >> $LOG 2>&1; then [[ $shouldfail -eq 0 ]] && error "$prefix: build failed" else [[ $shouldfail -eq 1 ]] && error "$prefix: build succeeded when it should have failed" fi } run_load_test() { file=$1 prefix=${file%%.patch} module=kpatch-$prefix.ko testprog=$prefix-LOADED.test [[ $prefix = COMBINED ]] && return [[ $prefix =~ -FAIL ]] && return if [[ ! -e $module ]]; then log "can't find $module, skipping" return fi if [[ -e $testprog ]]; then log "load test: $prefix" else log "load test: $prefix (no test prog)" fi if [[ -e $testprog ]] && ./$testprog >> $LOG 2>&1; then error "$prefix: $testprog succeeded before kpatch load" return fi if ! $KPATCH load $module >> $LOG 2>&1; then error "$prefix: kpatch load failed" return fi if [[ -e $testprog ]] && ! ./$testprog >> $LOG 2>&1; then error "$prefix: $testprog failed after kpatch load" fi if ! $KPATCH unload $module >> $LOG 2>&1; then error "$prefix: kpatch unload failed" return fi if [[ -e $testprog ]] && ./$testprog >> $LOG 2>&1; then error "$prefix: $testprog succeeded after kpatch unload" return fi } run_custom_test() { testprog=$1 prefix=${file%%.test} [[ $testprog = *-LOADED.test ]] && return log "custom test: $prefix" if ! ./$testprog >> $LOG 2>&1; then error "$prefix: test failed" fi } build_combined_module() { if [[ $SKIPBUILD -eq 1 ]] && [[ -e kpatch-COMBINED.ko ]]; then log "skipping build: combined" return fi if ! 
which combinediff > /dev/null; then log "patchutils not installed, skipping combined module build" error "PLEASE INSTALL PATCHUTILS" return fi rm -f COMBINED.patch TMP.patch first=1 for file in *.patch; do prefix=${file%%.patch} [[ $prefix =~ -FAIL ]] && continue [[ $prefix =~ -SLOW ]] && continue log "combine: $prefix" if [[ $first -eq 1 ]]; then cp -f $file COMBINED.patch first=0 continue fi combinediff COMBINED.patch $file > TMP.patch mv -f TMP.patch COMBINED.patch done log "build: combined module" if ! $KPATCHBUILD COMBINED.patch >> $LOG 2>&1; then error "combined build failed" fi } run_combined_test() { if [[ ! -e kpatch-COMBINED.ko ]]; then log "can't find kpatch-COMBINED.ko, skipping" return fi log "load test: combined module" unload_all for testprog in *.test; do [[ $testprog != *-LOADED.test ]] && continue if ./$testprog >> $LOG 2>&1; then error "combined: $testprog succeeded before kpatch load" return fi done if ! $KPATCH load kpatch-COMBINED.ko >> $LOG 2>&1; then error "combined: kpatch load failed" return fi for testprog in *.test; do [[ $testprog != *-LOADED.test ]] && continue if ! ./$testprog >> $LOG 2>&1; then error "combined: $testprog failed after kpatch load" fi done if ! 
$KPATCH unload kpatch-COMBINED.ko >> $LOG 2>&1; then error "combined: kpatch unload failed" return fi for testprog in *.test; do [[ $testprog != *-LOADED.test ]] && continue if ./$testprog >> $LOG 2>&1; then error "combined: $testprog succeeded after kpatch unload" return fi done } echo "clearing printk buffer" sudo dmesg -C if [[ $QUICK != 1 ]]; then for file in *.patch; do build_module $file done fi build_combined_module unload_all if [[ $QUICK != 1 ]]; then for file in *.patch; do run_load_test $file done fi run_combined_test if [[ $QUICK != 1 ]]; then for file in *.test; do unload_all run_custom_test $file done fi unload_all dmesg |grep -q "Call Trace" && error "kernel error detected in printk buffer" if [[ $ERROR -gt 0 ]]; then log "$ERROR errors encountered" echo "see test.log for more information" else log "SUCCESS" fi exit $ERROR kpatch-0.3.2/test/testmod/000077500000000000000000000000001266116401600154275ustar00rootroot00000000000000kpatch-0.3.2/test/testmod/Makefile000066400000000000000000000012321266116401600170650ustar00rootroot00000000000000BUILD ?= /lib/modules/$(shell uname -r)/build testmod.ko: testmod_drv.c patch < patch KCFLAGS="-ffunction-sections -fdata-sections" $(MAKE) -C $(BUILD) M=$(PWD) testmod.ko strip --keep-file-symbols -d testmod_drv.o cp testmod_drv.o testmod_drv.o.patched patch -R < patch KCFLAGS="-ffunction-sections -fdata-sections" $(MAKE) -C $(BUILD) M=$(PWD) testmod.ko strip --keep-file-symbols -d testmod_drv.o cp testmod_drv.o testmod_drv.o.orig $(MAKE) -C $(BUILD) M=$(PWD) clean $(MAKE) -C $(BUILD) M=$(PWD) testmod.ko all: testmod.ko clean: $(MAKE) -C $(BUILD) M=$(PWD) clean rm *.orig *.patched # kbuild rules obj-m := testmod.o testmod-y := testmod_drv.o kpatch-0.3.2/test/testmod/README000066400000000000000000000002171266116401600163070ustar00rootroot00000000000000To test, run ./doit.sh from the current directory. To test on a remote system, set remote system using REMOTE in doit.sh. Then run ./doit.sh. 
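The patch-name conventions documented in the kpatch-test header comment above (plain `.patch`, `-SLOW.patch`, `-FAIL.patch`) can be sketched as a small classifier. The `classify` function is a hypothetical helper for illustration only; it is not part of the framework, which instead tests `$prefix` with bash regex matches inline:

```shell
# Classify a *.patch file the way kpatch-test treats it.
# classify() is illustrative only; kpatch-test does this inline
# with [[ $prefix =~ -FAIL ]] / [[ $prefix =~ -SLOW ]] checks.
classify() {
    local prefix=${1%%.patch}     # same prefix derivation as kpatch-test
    case $prefix in
        *-FAIL) echo "build must fail" ;;
        *-SLOW) echo "skipped by --quick" ;;
        *)      echo "build must succeed" ;;
    esac
}

classify meminfo-init-FAIL.patch
classify meminfo-string.patch
```

The `-FAIL` suffix inverts the build expectation (a successful build is reported as an error), while `-SLOW` only affects the combined quick-mode run.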
kpatch-0.3.2/test/testmod/doit-client.sh000077500000000000000000000007451266116401600202070ustar00rootroot00000000000000#!/bin/bash #set -x rmmod testmod 2> /dev/null rmmod kpatch 2> /dev/null insmod testmod.ko || exit 1 insmod kpatch.ko || exit 1 if [[ "$(cat /sys/kernel/testmod/value)" != "2" ]] then exit 1 fi insmod kpatch-patch.ko dmesg | tail if [[ "$(cat /sys/kernel/testmod/value)" != "3" ]] then exit 1 fi echo 0 > /sys/kernel/kpatch/patches/kpatch_patch/enabled rmmod kpatch-patch if [[ "$(cat /sys/kernel/testmod/value)" != "2" ]] then exit 1 fi rmmod kpatch rmmod testmod echo "SUCCESS" kpatch-0.3.2/test/testmod/doit.sh000077500000000000000000000022051266116401600167240ustar00rootroot00000000000000#!/bin/bash #set -x # If testing on a remote machine, set it here # Probably want to use preshared keys. unset REMOTE #REMOTE="192.168.100.150" cd ../../ || exit 1 make clean || exit 1 make || exit 1 cd test/testmod || exit 1 make || exit 1 ../../kpatch-build/create-diff-object testmod_drv.o.orig testmod_drv.o.patched testmod.ko output.o || exit 1 cd ../../kmod/patch || exit 1 make clean || exit 1 cp ../../test/testmod/output.o . || exit 1 md5sum output.o | awk '{printf "%s\0", $1}' > checksum.tmp || exit 1 objcopy --add-section .kpatch.checksum=checksum.tmp --set-section-flags .kpatch.checksum=alloc,load,contents,readonly output.o || exit 1 rm -f checksum.tmp KBUILD_EXTRA_SYMBOLS="$(readlink -e ../../kmod/core/Module.symvers)" make || exit 1 cd ../../test/testmod if [[ -z "$REMOTE" ]] then cp ../../kmod/core/kpatch.ko . cp ../../kmod/patch/kpatch-patch.ko . sudo ./doit-client.sh else scp ../../kmod/core/kpatch.ko root@$REMOTE:~/. || exit 1 scp ../../kmod/patch/kpatch-patch.ko root@$REMOTE:~/. || exit 1 scp testmod.ko root@$REMOTE:~/. || exit 1 scp doit-client.sh root@$REMOTE:~/. 
|| exit 1 ssh root@$REMOTE ./doit-client.sh fi kpatch-0.3.2/test/testmod/patch000066400000000000000000000006221266116401600164510ustar00rootroot00000000000000--- testmod_drv.c.orig 2014-06-02 16:49:49.428509600 -0500 +++ testmod_drv.c 2014-06-02 16:49:56.973656791 -0500 @@ -11,7 +11,7 @@ static ssize_t value_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { - return sprintf(buf, "%d\n", value); + return sprintf(buf, "%d\n", value+1); } static struct kobj_attribute testmod_value_attr = __ATTR_RO(value); kpatch-0.3.2/test/testmod/testmod_drv.c000066400000000000000000000016121266116401600201250ustar00rootroot00000000000000#define pr_fmt(fmt) "testmod: " fmt #include <linux/module.h> #include <linux/kernel.h> #include <linux/kobject.h> #include <linux/sysfs.h> static struct kobject *testmod_kobj; int value = 2; static ssize_t value_show(struct kobject *kobj, struct kobj_attribute *attr, char *buf) { return sprintf(buf, "%d\n", value); } static struct kobj_attribute testmod_value_attr = __ATTR_RO(value); static int testmod_init(void) { int ret; testmod_kobj = kobject_create_and_add("testmod", kernel_kobj); if (!testmod_kobj) return -ENOMEM; ret = sysfs_create_file(testmod_kobj, &testmod_value_attr.attr); if (ret) { kobject_put(testmod_kobj); return ret; } return 0; } static void testmod_exit(void) { sysfs_remove_file(testmod_kobj, &testmod_value_attr.attr); kobject_put(testmod_kobj); } module_init(testmod_init); module_exit(testmod_exit); MODULE_LICENSE("GPL");
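The `multiple.test` script earlier in the archive maps module file names to their load-test programs with a suffix rewrite (`kpatch-foo.ko` ↔ `foo-LOADED.test`). A self-contained sketch of that mapping, with a hypothetical inverse helper added for illustration:

```shell
# From multiple.test: kpatch-<name>.ko -> <name>-LOADED.test
ko_to_test() {
    local tmp=${1%.ko}-LOADED.test   # strip .ko, append test suffix
    echo "${tmp#kpatch-}"            # drop the kpatch- module prefix
}

# Hypothetical inverse, not in the test suite:
# <name>-LOADED.test -> kpatch-<name>.ko
test_to_ko() {
    echo "kpatch-${1%-LOADED.test}.ko"
}

ko_to_test kpatch-meminfo-string.ko    # -> meminfo-string-LOADED.test
test_to_ko meminfo-string-LOADED.test  # -> kpatch-meminfo-string.ko
```

Because the mapping is purely lexical, test scripts added alongside new patches are picked up automatically, provided both files follow the convention.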