pax_global_header00006660000000000000000000000064127677011500014520gustar00rootroot0000000000000052 comment=1f828b97caf42332f0ff80d05b36da99296765dc dm-writeboost-2.2.6/000077500000000000000000000000001276770115000143265ustar00rootroot00000000000000dm-writeboost-2.2.6/ChangeLog000066400000000000000000000061601276770115000161030ustar00rootroot000000000000002016-09-19 Akira Hayakawa * v2.2.6 * Clarify producer-consumer pattern * Fix build error with 3.10 kernel * Fix build error with 3.14 kernel 2016-09-12 Akira Hayakawa * v2.2.5 * Fix read-caching data corruption issue * Insert memory barriers * Code cleanup 2016-08-28 Akira Hayakawa * v2.2.4 * Fix update_sb_record_interval * Throttle writeback when there are only few empty segments in the caching device * Remove experimental from read-caching 2016-08-02 Akira Hayakawa * v2.2.3 * Rename write_through_mode to write_around_mode because it's more precise * Reformat the caching device when it's write_around_mode 2016-07-30 Akira Hayakawa * v2.2.2 * Use kmap_atomic() to access the bio payload * Fix doc (clear_stat) 2016-07-18 Akira Hayakawa * v2.2.1 * Unsupport TRIM * Fixes (fail if partial read from caching device fails etc.) 2016-05-01 Akira Hayakawa * v2.2.0 * Remove partial writeback in foreground. This results in writing back cached data strictly from the older ones, which makes cache device corruption safer * Fix build error for kernel 4.6. per_bio_data_size is renamed to per_io_data_size * Remove SECTOR_SHIFT 2016-03-05 Akira Hayakawa * v2.1.2 * Remove blockup mechanism * Use vmalloc for read_cache_cell's buffer 2016-01-04 Akira Hayakawa * v2.1.1 * Define bio_endio_compat * Update copyright date * Update/fix docs 2015-08-02 Akira Hayakawa * v2.1.0 * Remove ACCESS_ONCE around cell->cancelled * Change the type of cell->cancelled from int to bool * Fix dmsetup table * Add write_through_mode 2015-07-28 Akira Hayakawa * v2.0.6 * Use vmalloc for rambuf and writeback_segs * Fix location of might_queue_current_buffer() (this is a good refactoring too) * Fix inject_read_cache so it checks cell->cancelled inside mutex. * Fix comment (ctr) 2015-07-20 Akira Hayakawa * v2.0.5 * Add __GFP_NOWARN to allocation of writeback ios * Use vmalloc for large_array struct 2015-07-15 Akira Hayakawa * v2.0.4 * Fast-path for clean initialization * Restrict the nr_max_batched_writeback 2015-07-13 Akira Hayakawa * v2.0.3 * Use separate wq for barrier flush 2015-07-12 Akira Hayakawa * v2.0.2 * Fix the crc32c wrapper so it complements the computed value. 2015-07-09 Akira Hayakawa * v2.0.1 * Fix for "mkfs.xfs -m crc=1" issue. Add copy_bio_payload(). * Fix end_io not to ignore error. * Fix bad pointer access in try_alloc_writeback_ios(). 2015-06-16 Akira Hayakawa * v2.0.0 * Design change. Purge static optional args (nr_rambuf_pool, segment_size_order) so as to work well with Dmitry's tool. 2015-05-14 Akira Hayakawa * v1.0.1 * Fix read-caching that didn't hit at all. 2015-05-10 Akira Hayakawa * v1.0.0 dm-writeboost-2.2.6/LICENSE000066400000000000000000000431771276770115000153470ustar00rootroot00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. 
By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. 
You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. 
If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. {description} Copyright (C) {year} {fullname} This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. 
{signature of Ty Coon}, 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. dm-writeboost-2.2.6/Makefile000066400000000000000000000004471276770115000157730ustar00rootroot00000000000000MODULE_VERSION ?= 2.2.6 DKMS_DIR := /usr/src/dm-writeboost-$(MODULE_VERSION) DKMS_KEY := -m dm-writeboost -v $(MODULE_VERSION) install: cp -r src $(DKMS_DIR) dkms add $(DKMS_KEY) dkms build $(DKMS_KEY) dkms install $(DKMS_KEY) uninstall: dkms remove --all $(DKMS_KEY) rm -rf $(DKMS_DIR) dm-writeboost-2.2.6/README.md000066400000000000000000000110301276770115000156000ustar00rootroot00000000000000# dm-writeboost [![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/akiradeveloper/dm-writeboost?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) Log-structured Caching for Linux ## Overview dm-writeboost is originated from [Disk Caching Disk(DCD)](http://www.ele.uri.edu/research/hpcl/DCD/DCD.html). DCD, implemented in Solaris, is an OS-level IO controller that builds logs from in-coming writes (data and metadata) and then writes the logs sequentially similar to log-structured filesystem. As a further extension, dm-writeboost supports read-caching which also writes data sequentially. ## Documents - doc/dm-writeboost-readme.txt - [dm-writeboost-internal](https://docs.google.com/presentation/d/1mDh5ct3OR-eRxBbci3LQgaTvUFx9WTLw-kkBxNBeTD8/edit?usp=sharing) - [dm-writeboost-for-admin](https://docs.google.com/presentation/d/1v-L8Ma138o7jNBFqRl0epyc1Lji3XhUH1RGj8p7DVe8/edit?usp=sharing) ## Features * **Durable**: Any power failure can't break consistency because each log consists of data, metadata and the checksum of the log itself. * **Lifetime**: Other caching software (e.g. dm-cache) separates data and metadata and therefore submits writes to SSD too frequently. dm-writeboost, on the other hand, submits only one writes for hundreds of data and metadata updates so the SSD lives longer since SSD's lifetime depends on how many writes are submitted. * **Fast**: Since the sequential write is the best I/O pattern for every SSD and the code base is optimized for in-coming random writes, the write performance is the best of all caching drivers including dm-cache and bcache. * **Portable**: Kernel version 3.10 or later is supported with minimum compile-time macros. ## Usage - **Install**: `sudo make install` to install and `sudo make uninstall` to uninstall. `sudo make uninstall MODULE_VERSION=xxx` can uninstall specific version that's installed. DKMS is required so please install it beforehand. (usually available in package system) - **Make a device**: Make a script to build a caching device. Please read doc/dm-writeboost-readme.txt for the dmsetup command detail. You also need to rebuild the caching device after reboot. To do this, cron's @reboot is recommended but you can use systemd or sysvinit. Note you don't need to prepare anything for system shutdown because dm-writeboost is even durable even against sudden power failure. 
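As a sketch, a boot-time rebuild script (run via cron's `@reboot`, systemd or sysvinit) could look like the following. The device paths, the mapping name `wbdev` and the `writeback_threshold` value are examples only; see doc/dm-writeboost-readme.txt for the exact table syntax.

```
#!/bin/sh
# Rebuild the dm-writeboost'd device after reboot.
# /dev/sdb = backing device (HDD), /dev/sdc = caching device (SSD) -- examples.
BACKING=/dev/sdb
CACHE=/dev/sdc
# Do NOT zero the first sector of $CACHE here; that would reformat the
# caching device instead of resuming it.
sz=`blockdev --getsize ${BACKING}`
dmsetup create wbdev --table "0 $sz writeboost $BACKING $CACHE 2 writeback_threshold 70"
```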
## Distribution Packages

- Debian: [Stretch](https://packages.debian.org/source/testing/dm-writeboost), [Sid](https://packages.debian.org/source/sid/dm-writeboost)
- Ubuntu: [Yakkety](http://packages.ubuntu.com/yakkety/kernel/dm-writeboost-dkms), [Xenial](http://packages.ubuntu.com/xenial/dm-writeboost-dkms), [Wily](http://packages.ubuntu.com/wily/dm-writeboost-dkms)
- [Tanglu](http://packages.tanglu.org/ja/dasyatis/kernel/dm-writeboost-dkms)
- Momonga

## Related Projects

* https://github.com/akiradeveloper/dm-writeboost-tools: Tools to help users analyze the state of the cache device
* https://gitlab.com/onlyjob/writeboost: A management tool including an init script
* https://github.com/akiradeveloper/writeboost-test-suite: Testing framework written in Scala

## Related works

* Y. Hu and Q. Yang -- DCD Disk Caching Disk: A New Approach for Boosting I/O Performance (1995) (http://www.ele.uri.edu/research/hpcl/DCD/DCD.html)
* G. Soundararajan et al. -- Extending SSD Lifetimes with Disk-Based Write Caches (2010) (https://www.usenix.org/conference/fast-10/extending-ssd-lifetimes-disk-based-write-caches)
* Y. Oh -- SSD RAID as Cache (SRC) with Log-structured Approach for Performance and Reliability (2014) (https://ysoh.files.wordpress.com/2009/05/dm-src-ibm.pdf)

## Award

Received the Japanese OSS Encouragement Award. Thanks!

## License

```
Copyright (C) 2012-2016 Akira Hayakawa

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
```

## Developer Info

Akira Hayakawa (@akiradeveloper)

e-mail: ruby.wktk@gmail.com

dm-writeboost-2.2.6/doc/dm-writeboost-readme.txt

dm-writeboost
=============
The dm-writeboost target provides block-level log-structured caching. All writes and reads are written to the caching device in a sequential manner.

Mechanism
=========

Control three layers (RAM buffer, caching device and backing device)
--------------------------------------------------------------------
dm-writeboost controls three different layers - the RAM buffer (rambuf), the caching device (cache_dev, e.g. SSD) and the backing device (backing_dev, e.g. HDD). All data are first stored in the RAM buffer and, when the RAM buffer is full, dm-writeboost adds a metadata block (with checksum) to the RAM buffer to create a "log". Afterward, the log is written to the caching device sequentially by a background thread and thereafter written back to the backing device in the background as well.

dm-writeboost vs dm-cache or bcache
===================================
How does dm-writeboost differ from other existing SSD-caching drivers? The most distinctive point is that dm-writeboost writes to the caching device the least frequently.
Because it creates a log that contains 127 writes before it actually writes the log to the caching device, writing to the caching device happens only once per 127 writes, while other caching drivers write more often. Since an SSD's lifetime decreases as it experiences writes, users can reduce the risk of wearing out the SSD.

dm-writeboost performs much more efficiently than other caching solutions under small random write patterns. But since it always splits requests into 4KB chunks, it may not be the best choice when the average I/O size in your workload is very large. Splitting overhead aside, however, dm-writeboost is always the best of all because it caches data in a sequential manner - the most efficient I/O pattern for the SSD caching device in terms of performance.

Experiments have shown that dm-writeboost performs poorly when the dm-writeboost'd device is created in a virtual environment such as KVM, so use this driver on a physical machine.

How To Use dm-writeboost
========================

Trigger caching device reformat
-------------------------------
The caching device is reformatted only if the first sector of the caching device is zeroed out. Note that this operation should be omitted when you resume the caching device.

e.g.
dd if=/dev/zero of=/dev/mapper/wbdev oflag=direct bs=512 count=1

Construct dm-writeboost'd device
--------------------------------
You can construct a dm-writeboost'd device with the dmsetup create command.

<essential args> <#optional args> <optional args>

- <#optional args> is twice the length of the following list.
- <optional args> is an unordered list of key-value pairs.

<essential args>
backing_dev : A block device having original data (e.g. HDD)
cache_dev   : A block device having caches (e.g. SSD)

<optional args>
see `Optional args`

e.g.
BACKING=/dev/sdb # example
CACHE=/dev/sdc # example
sz=`blockdev --getsize ${BACKING}`
dmsetup create wbdev --table "0 $sz writeboost $BACKING $CACHE"
dmsetup create wbdev --table "0 $sz writeboost $BACKING $CACHE 2 writeback_threshold 70"

Shut down the system
--------------------
On shutting down the system, you don't need to do anything at all. The data and metadata are safely saved on the caching device. But if you want to deconstruct the device manually, use dmsetup remove.

Resume after system reboot
--------------------------
To resume your caching device in its on-disk state, run the dmsetup create command with the same parameters, but DO NOT zero out the first sector of the caching device. This replays the logs on the caching device to rebuild the internal data structures.

Remove caching device
---------------------
If you want to detach your caching device for some reason (you don't like dm-writeboost anymore, or you want to upgrade the caching device to a newly purchased one), the safest way to do this is to clean the dirty data up from your caching device first and then deconstruct the dm-writeboost'd device. You can do this by first suspending/resuming the device to drop all transient data from the RAM buffer, and then sending the drop_caches message to drop dirty cache blocks from the caching device.

e.g.
dmsetup suspend wbdev; dmsetup resume wbdev
dmsetup message wbdev 0 drop_caches
dmsetup remove wbdev

Optional args
-------------
writeback_threshold (%)
accepts: 0..100
default: 0 (writeback disabled)
Writeback can be suppressed when the load of the backing device is higher than $writeback_threshold.

nr_max_batched_writeback
accepts: 1..32
default: 32
As an optimization, dm-writeboost writes back $nr_max_batched_writeback segments simultaneously.
The dirty caches in the segments are sorted in ascending order of the destination address and then written back. Setting large value can boost the writeback performance. update_sb_record_interval (sec) accepts: 0..3600 default: 0 (disabled) Update the superblock every $update_sb_record_interval second. 0 means disabled. Superblock memorizes the last segment ID that was written back. By enabling this, dm-writeboost in resuming can skip segments that's already written back and thus can shorten the resume time. sync_data_interval (sec) accepts: 0..3600 default: 0 (disabled) Sync all the volatile data every $sync_data_interval second. 0 means disabled. read_cache_threshold (int) accepts: 0..127 default: 0 (read caching disabled) More than $read_cache_threshold * 4KB consecutive reads won't be staged. write_around_mode (bool) accepts: 0..1 default: 0 By enabling this, dm-writeboost writes data directly to the backing device. Messages -------- You can change the behavior of dm-writeboost'd device by message. (1) Optional args The following optional args can be tuned online. e.g. dmsetup message wbdev 0 writeback_threshold 70 - writeback_threshold - nr_max_batched_writeback - update_sb_record_interval - sync_data_interval - read_cache_threshold (2) Others drop_caches Wait for all dirty data on the caching device to be written back to the backing device. This is interruptible. clear_stat Clear the statistic info (see `Status`). Status ------ <#optional args> dm-writeboost-2.2.6/src/000077500000000000000000000000001276770115000151155ustar00rootroot00000000000000dm-writeboost-2.2.6/src/Makefile000066400000000000000000000005121276770115000165530ustar00rootroot00000000000000KERNEL_SOURCE_VERSION ?= $(shell uname -r) KERNEL_TREE ?= /lib/modules/$(KERNEL_SOURCE_VERSION)/build obj-m := dm-writeboost.o dm-writeboost-objs := \ dm-writeboost-target.o \ dm-writeboost-metadata.o \ dm-writeboost-daemon.o all: $(MAKE) -C $(KERNEL_TREE) M=$(PWD) modules clean: $(MAKE) -C $(KERNEL_TREE) M=$(PWD) clean dm-writeboost-2.2.6/src/dkms.conf000066400000000000000000000003511276770115000167210ustar00rootroot00000000000000PACKAGE_NAME="dm-writeboost" PACKAGE_VERSION="2.2.6" BUILT_MODULE_NAME="dm-writeboost" DEST_MODULE_LOCATION="/kernel/drivers/md" MAKE="make all KERNEL_TREE=$kernel_source_dir" CLEAN="make clean" AUTOINSTALL="yes" REMAKE_INITRD="yes" dm-writeboost-2.2.6/src/dm-writeboost-daemon.c000066400000000000000000000341521276770115000213260ustar00rootroot00000000000000/* * This file is part of dm-writeboost * Copyright (C) 2012-2016 Akira Hayakawa * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License along * with this program; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
*/ #include "dm-writeboost.h" #include "dm-writeboost-metadata.h" #include "dm-writeboost-daemon.h" #include /*----------------------------------------------------------------------------*/ void queue_barrier_io(struct wb_device *wb, struct bio *bio) { mutex_lock(&wb->io_lock); bio_list_add(&wb->barrier_ios, bio); mutex_unlock(&wb->io_lock); /* * queue_work does nothing if the work is already in the queue. * So we don't have to care about it. */ queue_work(wb->barrier_wq, &wb->flush_barrier_work); } void flush_barrier_ios(struct work_struct *work) { struct wb_device *wb = container_of( work, struct wb_device, flush_barrier_work); if (bio_list_empty(&wb->barrier_ios)) return; atomic64_inc(&wb->count_non_full_flushed); flush_current_buffer(wb); } /*----------------------------------------------------------------------------*/ static void process_deferred_barriers(struct wb_device *wb, struct rambuffer *rambuf) { bool has_barrier = !bio_list_empty(&rambuf->barrier_ios); if (has_barrier) { struct bio *bio; /* Make all the preceding data persistent. */ int err = blkdev_issue_flush(wb->cache_dev->bdev, GFP_NOIO, NULL); /* Ack the chained barrier requests. */ while ((bio = bio_list_pop(&rambuf->barrier_ios))) bio_endio_compat(bio, err); } } static bool should_flush(struct wb_device *wb) { return atomic64_read(&wb->last_queued_segment_id) > atomic64_read(&wb->last_flushed_segment_id); } static void do_flush_proc(struct wb_device *wb) { struct segment_header *seg; struct rambuffer *rambuf; u64 id; struct dm_io_request io_req; struct dm_io_region region; if (!should_flush(wb)) { schedule_timeout_interruptible(msecs_to_jiffies(1000)); return; } id = atomic64_read(&wb->last_flushed_segment_id) + 1; smp_rmb(); rambuf = get_rambuffer_by_id(wb, id); seg = rambuf->seg; io_req = (struct dm_io_request) { WB_IO_WRITE, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_VMA, .mem.ptr.addr = rambuf->data, }; region = (struct dm_io_region) { .bdev = wb->cache_dev->bdev, .sector = seg->start_sector, .count = (seg->length + 1) << 3, }; if (wb_io(&io_req, 1, ®ion, NULL, false)) return; /* * Deferred ACK for barrier requests * To serialize barrier ACK in logging we wait for the previous segment * to be persistently written (if needed). */ process_deferred_barriers(wb, rambuf); /* * We can count up the last_flushed_segment_id only after segment * is written persistently. Counting up the id is serialized. 
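 *
 * Note: the smp_wmb() below is intended to pair with the smp_rmb() in
 * readers such as wait_for_flushing(), so that a waiter that observes the
 * incremented last_flushed_segment_id also observes the state written
 * before it.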
*/ smp_wmb(); atomic64_inc(&wb->last_flushed_segment_id); wake_up(&wb->flush_wait_queue); } int flush_daemon_proc(void *data) { struct wb_device *wb = data; while (!kthread_should_stop()) do_flush_proc(wb); return 0; } void wait_for_flushing(struct wb_device *wb, u64 id) { wait_event(wb->flush_wait_queue, atomic64_read(&wb->last_flushed_segment_id) >= id); smp_rmb(); } /*----------------------------------------------------------------------------*/ static void writeback_endio(unsigned long error, void *context) { struct wb_device *wb = context; if (error) atomic_inc(&wb->writeback_fail_count); if (atomic_dec_and_test(&wb->writeback_io_count)) wake_up(&wb->writeback_io_wait_queue); } static void submit_writeback_io(struct wb_device *wb, struct writeback_io *writeback_io) { ASSERT(writeback_io->data_bits > 0); if (writeback_io->data_bits == 255) { struct dm_io_request io_req_w = { WB_IO_WRITE, .client = wb->io_client, .notify.fn = writeback_endio, .notify.context = wb, .mem.type = DM_IO_VMA, .mem.ptr.addr = writeback_io->data, }; struct dm_io_region region_w = { .bdev = wb->backing_dev->bdev, .sector = writeback_io->sector, .count = 1 << 3, }; if (wb_io(&io_req_w, 1, ®ion_w, NULL, false)) writeback_endio(1, wb); } else { u8 i; for (i = 0; i < 8; i++) { struct dm_io_request io_req_w; struct dm_io_region region_w; bool bit_on = writeback_io->data_bits & (1 << i); if (!bit_on) continue; io_req_w = (struct dm_io_request) { WB_IO_WRITE, .client = wb->io_client, .notify.fn = writeback_endio, .notify.context = wb, .mem.type = DM_IO_VMA, .mem.ptr.addr = writeback_io->data + (i << 9), }; region_w = (struct dm_io_region) { .bdev = wb->backing_dev->bdev, .sector = writeback_io->sector + i, .count = 1, }; if (wb_io(&io_req_w, 1, ®ion_w, NULL, false)) writeback_endio(1, wb); } } } static void submit_writeback_ios(struct wb_device *wb) { struct blk_plug plug; struct rb_root wt = wb->writeback_tree; blk_start_plug(&plug); while (!RB_EMPTY_ROOT(&wt)) { struct writeback_io *writeback_io = writeback_io_from_node(rb_first(&wt)); rb_erase(&writeback_io->rb_node, &wt); submit_writeback_io(wb, writeback_io); } blk_finish_plug(&plug); } /* * Compare two writeback IOs * If the two have the same sector then compare them with the IDs. * We process the older ID first and then overwrites with the older. * * (10, 3) < (11, 1) * (10, 3) < (10, 4) */ static bool compare_writeback_io(struct writeback_io *a, struct writeback_io *b) { ASSERT(a); ASSERT(b); if (a->sector < b->sector) return true; if (a->id < b->id) return true; return false; } static void inc_writeback_io_count(u8 data_bits, size_t *writeback_io_count) { if (data_bits == 255) { (*writeback_io_count)++; } else { u8 i; for (i = 0; i < 8; i++) { if (data_bits & (1 << i)) (*writeback_io_count)++; } } } /* * Add writeback IO to RB-tree for sorted writeback. * All writeback IOs are sorted in ascending order. 
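 *
 * submit_writeback_ios() later drains the tree with rb_first(), so the
 * writes reach the backing device in ascending sector order, which is a
 * friendlier access pattern when the backing device is a rotating disk.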
*/ static void add_writeback_io(struct wb_device *wb, struct writeback_io *writeback_io) { struct rb_node **rbp, *parent; rbp = &wb->writeback_tree.rb_node; parent = NULL; while (*rbp) { struct writeback_io *parent_io; parent = *rbp; parent_io = writeback_io_from_node(parent); if (compare_writeback_io(writeback_io, parent_io)) rbp = &(*rbp)->rb_left; else rbp = &(*rbp)->rb_right; } rb_link_node(&writeback_io->rb_node, parent, rbp); rb_insert_color(&writeback_io->rb_node, &wb->writeback_tree); } static int fill_writeback_seg(struct wb_device *wb, struct writeback_segment *writeback_seg) { struct segment_header *seg = writeback_seg->seg; struct dm_io_request io_req_r = { WB_IO_READ, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_VMA, .mem.ptr.addr = writeback_seg->buf, }; struct dm_io_region region_r = { .bdev = wb->cache_dev->bdev, .sector = seg->start_sector + (1 << 3), /* Header excluded */ .count = seg->length << 3, }; /* * dm_io() allows region.count = 0 * so we don't need to skip here in case of seg->length = 0 */ return wb_io(&io_req_r, 1, ®ion_r, NULL, false); } static void prepare_writeback_ios(struct wb_device *wb, struct writeback_segment *writeback_seg, size_t *writeback_io_count) { struct segment_header *seg = writeback_seg->seg; u8 i; for (i = 0; i < seg->length; i++) { struct writeback_io *writeback_io; struct metablock *mb = seg->mb_array + i; struct dirtiness dirtiness = read_mb_dirtiness(wb, seg, mb); ASSERT(dirtiness.data_bits > 0); if (!dirtiness.is_dirty) continue; writeback_io = writeback_seg->ios + i; writeback_io->sector = mb->sector; writeback_io->id = seg->id; /* writeback_io->data is already set */ writeback_io->data_bits = dirtiness.data_bits; inc_writeback_io_count(writeback_io->data_bits, writeback_io_count); add_writeback_io(wb, writeback_io); } } void mark_clean_seg(struct wb_device *wb, struct segment_header *seg) { u8 i; for (i = 0; i < seg->length; i++) { struct metablock *mb = seg->mb_array + i; if (mark_clean_mb(wb, mb)) dec_nr_dirty_caches(wb); } } /* * Try writeback some specified segs and returns if all writeback ios succeeded. */ static bool try_writeback_segs(struct wb_device *wb) { struct writeback_segment *writeback_seg; size_t writeback_io_count = 0; u32 k; /* Create RB-tree */ wb->writeback_tree = RB_ROOT; for (k = 0; k < wb->nr_cur_batched_writeback; k++) { writeback_seg = *(wb->writeback_segs + k); if (fill_writeback_seg(wb, writeback_seg)) return false; prepare_writeback_ios(wb, writeback_seg, &writeback_io_count); } atomic_set(&wb->writeback_io_count, writeback_io_count); atomic_set(&wb->writeback_fail_count, 0); /* Pop rbnodes out of the tree and submit writeback I/Os */ submit_writeback_ios(wb); wait_event(wb->writeback_io_wait_queue, !atomic_read(&wb->writeback_io_count)); return atomic_read(&wb->writeback_fail_count) == 0; } static bool do_writeback_segs(struct wb_device *wb) { if (!try_writeback_segs(wb)) return false; blkdev_issue_flush(wb->backing_dev->bdev, GFP_NOIO, NULL); return true; } /* * Calculate the number of segments to write back. 
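 *
 * update_nr_empty_segs() refreshes wb->nr_empty_segs, and calc_nr_writeback()
 * takes the minimum of the flushed-but-not-yet-written-back segment count,
 * the allocated batch size (nr_writeback_segs), and nr_empty_segs + 1.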
*/ void update_nr_empty_segs(struct wb_device *wb) { wb->nr_empty_segs = atomic64_read(&wb->last_writeback_segment_id) + wb->nr_segments - wb->current_seg->id; } static u32 calc_nr_writeback(struct wb_device *wb) { u32 nr_writeback_candidates = atomic64_read(&wb->last_flushed_segment_id) - atomic64_read(&wb->last_writeback_segment_id); u32 nr_max_batch = ACCESS_ONCE(wb->nr_max_batched_writeback); if (wb->nr_writeback_segs != nr_max_batch) try_alloc_writeback_ios(wb, nr_max_batch, GFP_NOIO | __GFP_NOWARN); return min3(nr_writeback_candidates, wb->nr_writeback_segs, wb->nr_empty_segs + 1); } static bool should_writeback(struct wb_device *wb) { return ACCESS_ONCE(wb->allow_writeback) || ACCESS_ONCE(wb->urge_writeback) || ACCESS_ONCE(wb->force_drop); } static void do_writeback_proc(struct wb_device *wb) { u32 k, nr_writeback_tbd; if (!should_writeback(wb)) { schedule_timeout_interruptible(msecs_to_jiffies(1000)); return; } nr_writeback_tbd = calc_nr_writeback(wb); if (!nr_writeback_tbd) { schedule_timeout_interruptible(msecs_to_jiffies(1000)); return; } smp_rmb(); /* Store segments into writeback_segs */ for (k = 0; k < nr_writeback_tbd; k++) { struct writeback_segment *writeback_seg = *(wb->writeback_segs + k); writeback_seg->seg = get_segment_header_by_id(wb, atomic64_read(&wb->last_writeback_segment_id) + 1 + k); } wb->nr_cur_batched_writeback = nr_writeback_tbd; if (!do_writeback_segs(wb)) return; /* A segment after written back is clean */ for (k = 0; k < wb->nr_cur_batched_writeback; k++) { struct writeback_segment *writeback_seg = *(wb->writeback_segs + k); mark_clean_seg(wb, writeback_seg->seg); } smp_wmb(); atomic64_add(wb->nr_cur_batched_writeback, &wb->last_writeback_segment_id); wake_up(&wb->writeback_wait_queue); } int writeback_daemon_proc(void *data) { struct wb_device *wb = data; while (!kthread_should_stop()) do_writeback_proc(wb); return 0; } /* * Wait for a segment to be written back. * The segment after written back is clean. 
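 *
 * urge_writeback is set while waiting so that should_writeback() keeps the
 * writeback daemon running even when writeback_threshold would otherwise
 * suppress writeback.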
*/ void wait_for_writeback(struct wb_device *wb, u64 id) { wb->urge_writeback = true; wake_up_process(wb->writeback_daemon); wait_event(wb->writeback_wait_queue, atomic64_read(&wb->last_writeback_segment_id) >= id); smp_rmb(); wb->urge_writeback = false; } /*----------------------------------------------------------------------------*/ int writeback_modulator_proc(void *data) { struct wb_device *wb = data; struct hd_struct *hd = wb->backing_dev->bdev->bd_part; unsigned long old = 0, new, util; unsigned long intvl = 1000; while (!kthread_should_stop()) { new = jiffies_to_msecs(part_stat_read(hd, io_ticks)); util = div_u64(100 * (new - old), 1000); if (util < ACCESS_ONCE(wb->writeback_threshold)) wb->allow_writeback = true; else wb->allow_writeback = false; old = new; update_nr_empty_segs(wb); schedule_timeout_interruptible(msecs_to_jiffies(intvl)); } return 0; } /*----------------------------------------------------------------------------*/ static void update_superblock_record(struct wb_device *wb) { struct superblock_record_device o; void *buf; struct dm_io_request io_req; struct dm_io_region region; o.last_writeback_segment_id = cpu_to_le64(atomic64_read(&wb->last_writeback_segment_id)); buf = mempool_alloc(wb->buf_1_pool, GFP_NOIO); memset(buf, 0, 1 << 9); memcpy(buf, &o, sizeof(o)); io_req = (struct dm_io_request) { WB_IO_WRITE_FUA, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_KMEM, .mem.ptr.addr = buf, }; region = (struct dm_io_region) { .bdev = wb->cache_dev->bdev, .sector = (1 << 11) - 1, .count = 1, }; wb_io(&io_req, 1, ®ion, NULL, false); mempool_free(buf, wb->buf_1_pool); } int sb_record_updater_proc(void *data) { struct wb_device *wb = data; unsigned long intvl; while (!kthread_should_stop()) { /* sec -> ms */ intvl = ACCESS_ONCE(wb->update_sb_record_interval) * 1000; if (!intvl) { schedule_timeout_interruptible(msecs_to_jiffies(1000)); continue; } update_superblock_record(wb); schedule_timeout_interruptible(msecs_to_jiffies(intvl)); } return 0; } /*----------------------------------------------------------------------------*/ int data_synchronizer_proc(void *data) { struct wb_device *wb = data; unsigned long intvl; while (!kthread_should_stop()) { /* sec -> ms */ intvl = ACCESS_ONCE(wb->sync_data_interval) * 1000; if (!intvl) { schedule_timeout_interruptible(msecs_to_jiffies(1000)); continue; } flush_current_buffer(wb); blkdev_issue_flush(wb->cache_dev->bdev, GFP_NOIO, NULL); schedule_timeout_interruptible(msecs_to_jiffies(intvl)); } return 0; } dm-writeboost-2.2.6/src/dm-writeboost-daemon.h000066400000000000000000000036741276770115000213400ustar00rootroot00000000000000/* * This file is part of dm-writeboost * Copyright (C) 2012-2016 Akira Hayakawa * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License along * with this program; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
*/ #ifndef DM_WRITEBOOST_DAEMON_H #define DM_WRITEBOOST_DAEMON_H /*----------------------------------------------------------------------------*/ int flush_daemon_proc(void *); void wait_for_flushing(struct wb_device *, u64 id); /*----------------------------------------------------------------------------*/ void queue_barrier_io(struct wb_device *, struct bio *); void flush_barrier_ios(struct work_struct *); /*----------------------------------------------------------------------------*/ void update_nr_empty_segs(struct wb_device *); int writeback_daemon_proc(void *); void wait_for_writeback(struct wb_device *, u64 id); void mark_clean_seg(struct wb_device *, struct segment_header *seg); /*----------------------------------------------------------------------------*/ int writeback_modulator_proc(void *); /*----------------------------------------------------------------------------*/ int data_synchronizer_proc(void *); /*----------------------------------------------------------------------------*/ int sb_record_updater_proc(void *); /*----------------------------------------------------------------------------*/ #endif dm-writeboost-2.2.6/src/dm-writeboost-metadata.c000066400000000000000000000762471276770115000216560ustar00rootroot00000000000000/* * This file is part of dm-writeboost * Copyright (C) 2012-2016 Akira Hayakawa * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License along * with this program; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. */ #include "dm-writeboost.h" #include "dm-writeboost-metadata.h" #include "dm-writeboost-daemon.h" /*----------------------------------------------------------------------------*/ struct large_array { u64 nr_elems; u32 elemsize; void *data; }; static struct large_array *large_array_alloc(u32 elemsize, u64 nr_elems) { struct large_array *arr = kmalloc(sizeof(*arr), GFP_KERNEL); if (!arr) { DMERR("Failed to allocate arr"); return NULL; } arr->elemsize = elemsize; arr->nr_elems = nr_elems; arr->data = vmalloc(elemsize * nr_elems); if (!arr->data) { DMERR("Failed to allocate data"); goto bad_alloc_data; } return arr; bad_alloc_data: kfree(arr); return NULL; } static void large_array_free(struct large_array *arr) { vfree(arr->data); kfree(arr); } static void *large_array_at(struct large_array *arr, u64 i) { return arr->data + arr->elemsize * i; } /*----------------------------------------------------------------------------*/ /* * Get the in-core metablock of the given index. 
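 *
 * e.g. with nr_caches_inseg = 127, idx 300 lives in segment 300 / 127 = 2,
 * at offset 300 % 127 = 46 within that segment's mb_array.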
*/ static struct metablock *mb_at(struct wb_device *wb, u32 idx) { u32 idx_inseg; u32 seg_idx = div_u64_rem(idx, wb->nr_caches_inseg, &idx_inseg); struct segment_header *seg = large_array_at(wb->segment_header_array, seg_idx); return seg->mb_array + idx_inseg; } static void mb_array_empty_init(struct wb_device *wb) { u32 i; for (i = 0; i < wb->nr_caches; i++) { struct metablock *mb = mb_at(wb, i); INIT_HLIST_NODE(&mb->ht_list); mb->idx = i; mb->dirtiness.data_bits = 0; mb->dirtiness.is_dirty = false; } } /* * Calc the starting sector of the k-th segment */ static sector_t calc_segment_header_start(struct wb_device *wb, u32 k) { return (1 << 11) + (1 << SEGMENT_SIZE_ORDER) * k; } static u32 calc_nr_segments(struct dm_dev *dev, struct wb_device *wb) { sector_t devsize = dm_devsize(dev); return div_u64(devsize - (1 << 11), 1 << SEGMENT_SIZE_ORDER); } /* * Get the relative index in a segment of the mb_idx-th metablock */ u8 mb_idx_inseg(struct wb_device *wb, u32 mb_idx) { u32 tmp32; div_u64_rem(mb_idx, wb->nr_caches_inseg, &tmp32); return tmp32; } /* * Calc the starting sector of the mb_idx-th cache block */ sector_t calc_mb_start_sector(struct wb_device *wb, struct segment_header *seg, u32 mb_idx) { return seg->start_sector + ((1 + mb_idx_inseg(wb, mb_idx)) << 3); } /* * Get the segment that contains the passed mb */ struct segment_header *mb_to_seg(struct wb_device *wb, struct metablock *mb) { struct segment_header *seg; seg = ((void *) mb) - mb_idx_inseg(wb, mb->idx) * sizeof(struct metablock) - sizeof(struct segment_header); return seg; } bool is_on_buffer(struct wb_device *wb, u32 mb_idx) { u32 start = wb->current_seg->start_idx; if (mb_idx < start) return false; if (mb_idx >= (start + wb->nr_caches_inseg)) return false; return true; } static u32 segment_id_to_idx(struct wb_device *wb, u64 id) { u32 idx; div_u64_rem(id - 1, wb->nr_segments, &idx); return idx; } static struct segment_header *segment_at(struct wb_device *wb, u32 k) { return large_array_at(wb->segment_header_array, k); } /* * Get the segment from the segment id. * The index of the segment is calculated from the segment id. 
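 *
 * Ids start at 1 and wrap around the ring of segments:
 * e.g. with nr_segments = 4, ids 1..5 map to indexes 0, 1, 2, 3, 0.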
*/ struct segment_header *get_segment_header_by_id(struct wb_device *wb, u64 id) { return segment_at(wb, segment_id_to_idx(wb, id)); } /*----------------------------------------------------------------------------*/ static int init_segment_header_array(struct wb_device *wb) { u32 segment_idx; wb->segment_header_array = large_array_alloc( sizeof(struct segment_header) + sizeof(struct metablock) * wb->nr_caches_inseg, wb->nr_segments); if (!wb->segment_header_array) { DMERR("Failed to allocate segment_header_array"); return -ENOMEM; } for (segment_idx = 0; segment_idx < wb->nr_segments; segment_idx++) { struct segment_header *seg = large_array_at(wb->segment_header_array, segment_idx); seg->id = 0; seg->length = 0; atomic_set(&seg->nr_inflight_ios, 0); /* Const values */ seg->start_idx = wb->nr_caches_inseg * segment_idx; seg->start_sector = calc_segment_header_start(wb, segment_idx); } mb_array_empty_init(wb); return 0; } static void free_segment_header_array(struct wb_device *wb) { large_array_free(wb->segment_header_array); } /*----------------------------------------------------------------------------*/ struct ht_head { struct hlist_head ht_list; }; static int ht_empty_init(struct wb_device *wb) { u32 idx; size_t i, nr_heads; struct large_array *arr; wb->htsize = wb->nr_caches; nr_heads = wb->htsize + 1; arr = large_array_alloc(sizeof(struct ht_head), nr_heads); if (!arr) { DMERR("Failed to allocate htable"); return -ENOMEM; } wb->htable = arr; for (i = 0; i < nr_heads; i++) { struct ht_head *hd = large_array_at(arr, i); INIT_HLIST_HEAD(&hd->ht_list); } wb->null_head = large_array_at(wb->htable, wb->htsize); for (idx = 0; idx < wb->nr_caches; idx++) { struct metablock *mb = mb_at(wb, idx); hlist_add_head(&mb->ht_list, &wb->null_head->ht_list); } return 0; } static void free_ht(struct wb_device *wb) { large_array_free(wb->htable); } struct ht_head *ht_get_head(struct wb_device *wb, struct lookup_key *key) { u32 idx; div_u64_rem(key->sector >> 3, wb->htsize, &idx); return large_array_at(wb->htable, idx); } static bool mb_hit(struct metablock *mb, struct lookup_key *key) { return mb->sector == key->sector; } /* * Remove the metablock from the hashtable and link the orphan to the null head. */ void ht_del(struct wb_device *wb, struct metablock *mb) { struct ht_head *null_head; hlist_del(&mb->ht_list); null_head = wb->null_head; hlist_add_head(&mb->ht_list, &null_head->ht_list); } void ht_register(struct wb_device *wb, struct ht_head *head, struct metablock *mb, struct lookup_key *key) { hlist_del(&mb->ht_list); hlist_add_head(&mb->ht_list, &head->ht_list); BUG_ON(key->sector & 7); // should be 4KB aligned mb->sector = key->sector; }; struct metablock *ht_lookup(struct wb_device *wb, struct ht_head *head, struct lookup_key *key) { struct metablock *mb, *found = NULL; hlist_for_each_entry(mb, &head->ht_list, ht_list) { if (mb_hit(mb, key)) { found = mb; break; } } return found; } /* * Remove all the metablock in the segment from the lookup table. 
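 *
 * ht_del() re-links each metablock to the null head, so later lookups on
 * the old sectors simply miss instead of hitting stale entries.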
*/ void discard_caches_inseg(struct wb_device *wb, struct segment_header *seg) { u8 i; for (i = 0; i < wb->nr_caches_inseg; i++) { struct metablock *mb = seg->mb_array + i; ht_del(wb, mb); } } /*----------------------------------------------------------------------------*/ static int read_superblock_header(struct superblock_header_device *sup, struct wb_device *wb) { int err = 0; struct dm_io_request io_req_sup; struct dm_io_region region_sup; void *buf = mempool_alloc(wb->buf_1_pool, GFP_KERNEL); if (!buf) return -ENOMEM; check_buffer_alignment(buf); io_req_sup = (struct dm_io_request) { WB_IO_READ, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_KMEM, .mem.ptr.addr = buf, }; region_sup = (struct dm_io_region) { .bdev = wb->cache_dev->bdev, .sector = 0, .count = 1, }; err = wb_io(&io_req_sup, 1, ®ion_sup, NULL, false); if (err) goto bad_io; memcpy(sup, buf, sizeof(*sup)); bad_io: mempool_free(buf, wb->buf_1_pool); return err; } /* * check if the cache device is already formatted. * returns 0 iff this routine runs without failure. */ static int audit_cache_device(struct wb_device *wb) { int err = 0; struct superblock_header_device sup; err = read_superblock_header(&sup, wb); if (err) { DMERR("read_superblock_header failed"); return err; } wb->do_format = false; if (le32_to_cpu(sup.magic) != WB_MAGIC || wb->write_around_mode) { /* write-around mode should discard all caches */ wb->do_format = true; DMERR("Superblock Header: Magic number invalid"); return 0; } return err; } static int format_superblock_header(struct wb_device *wb) { int err = 0; struct dm_io_request io_req_sup; struct dm_io_region region_sup; struct superblock_header_device sup = { .magic = cpu_to_le32(WB_MAGIC), }; void *buf = mempool_alloc(wb->buf_1_pool, GFP_KERNEL); if (!buf) return -ENOMEM; memcpy(buf, &sup, sizeof(sup)); io_req_sup = (struct dm_io_request) { WB_IO_WRITE_FUA, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_KMEM, .mem.ptr.addr = buf, }; region_sup = (struct dm_io_region) { .bdev = wb->cache_dev->bdev, .sector = 0, .count = 1, }; err = wb_io(&io_req_sup, 1, ®ion_sup, NULL, false); if (err) goto bad_io; bad_io: mempool_free(buf, wb->buf_1_pool); return err; } struct format_segmd_context { int err; atomic64_t count; }; static void format_segmd_endio(unsigned long error, void *__context) { struct format_segmd_context *context = __context; if (error) context->err = 1; atomic64_dec(&context->count); } struct zeroing_context { int error; struct completion complete; }; static void zeroing_complete(int read_err, unsigned long write_err, void *context) { struct zeroing_context *zc = context; if (read_err || write_err) zc->error = -EIO; complete(&zc->complete); } /* * Synchronously zeroes out a region on a device. 
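 *
 * Built on dm_kcopyd_zero(); zeroing_complete() records the result and the
 * caller blocks on the completion until kcopyd finishes.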
*/ static int do_zeroing_region(struct wb_device *wb, struct dm_io_region *region) { int err; struct zeroing_context zc; zc.error = 0; init_completion(&zc.complete); err = dm_kcopyd_zero(wb->copier, 1, region, 0, zeroing_complete, &zc); if (err) return err; wait_for_completion(&zc.complete); return zc.error; } static int zeroing_full_superblock(struct wb_device *wb) { struct dm_io_region region = { .bdev = wb->cache_dev->bdev, .sector = 0, .count = 1 << 11, }; return do_zeroing_region(wb, &region); } static int format_all_segment_headers(struct wb_device *wb) { int err = 0; struct dm_dev *dev = wb->cache_dev; u32 i; struct format_segmd_context context; void *buf = mempool_alloc(wb->buf_8_pool, GFP_KERNEL); if (!buf) return -ENOMEM; memset(buf, 0, 1 << 12); check_buffer_alignment(buf); atomic64_set(&context.count, wb->nr_segments); context.err = 0; /* Submit all the writes asynchronously. */ for (i = 0; i < wb->nr_segments; i++) { struct dm_io_request io_req_seg = { WB_IO_WRITE, .client = wb->io_client, .notify.fn = format_segmd_endio, .notify.context = &context, .mem.type = DM_IO_KMEM, .mem.ptr.addr = buf, }; struct dm_io_region region_seg = { .bdev = dev->bdev, .sector = calc_segment_header_start(wb, i), .count = (1 << 3), }; err = wb_io(&io_req_seg, 1, &region_seg, NULL, false); if (err) break; } if (err) goto bad; /* Wait for all the writes to complete. */ while (atomic64_read(&context.count)) schedule_timeout_interruptible(msecs_to_jiffies(100)); if (context.err) { DMERR("I/O failed"); err = -EIO; goto bad; } err = blkdev_issue_flush(dev->bdev, GFP_KERNEL, NULL); bad: mempool_free(buf, wb->buf_8_pool); return err; } /* * Format the superblock header and all the segment headers in a cache device */ static int format_cache_device(struct wb_device *wb) { int err = zeroing_full_superblock(wb); if (err) { DMERR("zeroing_full_superblock failed"); return err; } err = format_all_segment_headers(wb); if (err) { DMERR("format_all_segment_headers failed"); return err; } err = format_superblock_header(wb); /* First 512B */ if (err) { DMERR("format_superblock_header failed"); return err; } return err; } /* * First check whether the superblock and the passed arguments are consistent and * re-format the cache structure if they are not. * If you want to force a re-format of the cache device, zero out the first * sector of the device.
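 *
 * In effect, audit_cache_device() above reduces the decision to
 *
 *   wb->do_format = (le32_to_cpu(sup.magic) != WB_MAGIC) || wb->write_around_mode;
 *
 * so wiping the first 512B (which holds the magic number) is enough to force
 * a clean re-format on the next construction.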
*/ static int might_format_cache_device(struct wb_device *wb) { int err = 0; err = audit_cache_device(wb); if (err) { DMERR("audit_cache_device failed"); return err; } if (wb->do_format) { err = format_cache_device(wb); if (err) { DMERR("format_cache_device failed"); return err; } } return err; } /*----------------------------------------------------------------------------*/ static int init_rambuf_pool(struct wb_device *wb) { int err = 0; size_t i; wb->rambuf_pool = kmalloc(sizeof(struct rambuffer) * NR_RAMBUF_POOL, GFP_KERNEL); if (!wb->rambuf_pool) return -ENOMEM; for (i = 0; i < NR_RAMBUF_POOL; i++) { void *alloced = vmalloc(1 << (SEGMENT_SIZE_ORDER + 9)); if (!alloced) { size_t j; DMERR("Failed to allocate rambuf->data"); for (j = 0; j < i; j++) { vfree(wb->rambuf_pool[j].data); } err = -ENOMEM; goto bad_alloc_data; } wb->rambuf_pool[i].data = alloced; } return err; bad_alloc_data: kfree(wb->rambuf_pool); return err; } static void free_rambuf_pool(struct wb_device *wb) { size_t i; for (i = 0; i < NR_RAMBUF_POOL; i++) vfree(wb->rambuf_pool[i].data); kfree(wb->rambuf_pool); } struct rambuffer *get_rambuffer_by_id(struct wb_device *wb, u64 id) { u32 tmp32; div_u64_rem(id - 1, NR_RAMBUF_POOL, &tmp32); return wb->rambuf_pool + tmp32; } /*----------------------------------------------------------------------------*/ /* * Initialize core devices * - Cache device (SSD) * - RAM buffers (DRAM) */ static int init_devices(struct wb_device *wb) { int err = 0; err = might_format_cache_device(wb); if (err) return err; err = init_rambuf_pool(wb); if (err) { DMERR("init_rambuf_pool failed"); return err; } return err; } static void free_devices(struct wb_device *wb) { free_rambuf_pool(wb); } /*----------------------------------------------------------------------------*/ static int read_superblock_record(struct superblock_record_device *record, struct wb_device *wb) { int err = 0; struct dm_io_request io_req; struct dm_io_region region; void *buf = mempool_alloc(wb->buf_1_pool, GFP_KERNEL); if (!buf) return -ENOMEM; check_buffer_alignment(buf); io_req = (struct dm_io_request) { WB_IO_READ, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_KMEM, .mem.ptr.addr = buf, }; region = (struct dm_io_region) { .bdev = wb->cache_dev->bdev, .sector = (1 << 11) - 1, .count = 1, }; err = wb_io(&io_req, 1, ®ion, NULL, false); if (err) goto bad_io; memcpy(record, buf, sizeof(*record)); bad_io: mempool_free(buf, wb->buf_1_pool); return err; } /* * Read out whole segment of @seg to a pre-allocated @buf */ static int read_whole_segment(void *buf, struct wb_device *wb, struct segment_header *seg) { struct dm_io_request io_req = { WB_IO_READ, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_VMA, .mem.ptr.addr = buf, }; struct dm_io_region region = { .bdev = wb->cache_dev->bdev, .sector = seg->start_sector, .count = 1 << SEGMENT_SIZE_ORDER, }; return wb_io(&io_req, 1, ®ion, NULL, false); } /* * We make a checksum of a segment from the valid data in a segment except the * first 1 sector. 
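 *
 * Concretely, for a log whose header records length = 3, the CRC covers
 * (4096 - 512) + 3 * 4096 = 15872 bytes starting at byte offset 512 of the
 * RAM buffer: the header block minus its first sector plus the three 4KB
 * data blocks. Replay later recomputes the same value and compares it with
 * the stored one, roughly:
 *
 *   valid = calc_checksum(rambuf, header->length) == le32_to_cpu(header->checksum);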
*/ u32 calc_checksum(void *rambuffer, u8 length) { unsigned int len = (4096 - 512) + 4096 * length; return ~crc32c(0xffffffff, rambuffer + 512, len); } void prepare_segment_header_device(void *rambuffer, struct wb_device *wb, struct segment_header *src) { struct segment_header_device *dest = rambuffer; u32 i; ASSERT((src->length) == (wb->cursor - src->start_idx)); for (i = 0; i < src->length; i++) { struct metablock *mb = src->mb_array + i; struct metablock_device *mbdev = dest->mbarr + i; mbdev->sector = cpu_to_le64((u64)mb->sector); mbdev->dirty_bits = mb->dirtiness.is_dirty ? mb->dirtiness.data_bits : 0; } dest->id = cpu_to_le64(src->id); dest->length = src->length; dest->checksum = cpu_to_le32(calc_checksum(rambuffer, src->length)); } /*----------------------------------------------------------------------------*/ /* * Apply @i-th metablock in @src to @seg */ static int apply_metablock_device(struct wb_device *wb, struct segment_header *seg, struct segment_header_device *src, u8 i) { struct lookup_key key; struct ht_head *head; struct metablock *found = NULL, *mb = seg->mb_array + i; struct metablock_device *mbdev = src->mbarr + i; mb->sector = le64_to_cpu(mbdev->sector); mb->dirtiness.data_bits = mbdev->dirty_bits ? mbdev->dirty_bits : 255; mb->dirtiness.is_dirty = mbdev->dirty_bits ? true : false; key = (struct lookup_key) { .sector = mb->sector, }; head = ht_get_head(wb, &key); found = ht_lookup(wb, head, &key); if (found) { int err = 0; u8 i; struct write_io wio; void *buf = mempool_alloc(wb->buf_8_pool, GFP_KERNEL); if (!buf) return -ENOMEM; wio = (struct write_io) { .data = buf, .data_bits = 0, }; err = prepare_overwrite(wb, mb_to_seg(wb, found), found, &wio, mb->dirtiness.data_bits); if (err) goto fail_out; for (i = 0; i < 8; i++) { struct dm_io_request io_req; struct dm_io_region region; if (!(wio.data_bits & (1 << i))) continue; io_req = (struct dm_io_request) { WB_IO_WRITE, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_KMEM, .mem.ptr.addr = wio.data + (i << 9), }; region = (struct dm_io_region) { .bdev = wb->backing_dev->bdev, .sector = mb->sector + i, .count = 1, }; err = wb_io(&io_req, 1, ®ion, NULL, true); if (err) break; } fail_out: mempool_free(buf, wb->buf_8_pool); if (err) return err; } ht_register(wb, head, mb, &key); if (mb->dirtiness.is_dirty) inc_nr_dirty_caches(wb); return 0; } static int apply_segment_header_device(struct wb_device *wb, struct segment_header *seg, struct segment_header_device *src) { int err = 0; u8 i; seg->length = src->length; for (i = 0; i < src->length; i++) { err = apply_metablock_device(wb, seg, src, i); if (err) break; } return err; } /* * Read out only segment header (4KB) of @seg to @buf */ static int read_segment_header(void *buf, struct wb_device *wb, struct segment_header *seg) { struct dm_io_request io_req = { WB_IO_READ, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_KMEM, .mem.ptr.addr = buf, }; struct dm_io_region region = { .bdev = wb->cache_dev->bdev, .sector = seg->start_sector, .count = 8, }; return wb_io(&io_req, 1, ®ion, NULL, false); } /* * Find the max id from all the segment headers * @max_id (out) : The max id found */ static int do_find_max_id(struct wb_device *wb, u64 *max_id) { int err = 0; u32 k; void *buf = mempool_alloc(wb->buf_8_pool, GFP_KERNEL); if (!buf) return -ENOMEM; check_buffer_alignment(buf); *max_id = 0; for (k = 0; k < wb->nr_segments; k++) { struct segment_header *seg = segment_at(wb, k); struct segment_header_device *header; err = read_segment_header(buf, wb, seg); if 
(err) goto out; header = buf; if (le64_to_cpu(header->id) > *max_id) *max_id = le64_to_cpu(header->id); } out: mempool_free(buf, wb->buf_8_pool); return err; } static int find_max_id(struct wb_device *wb, u64 *max_id) { /* * Fast path. * If it's the first creation, we don't need to look over * the segment headers to know that the max_id is zero. */ if (wb->do_format) { *max_id = 0; return 0; } return do_find_max_id(wb, max_id); } /* * Iterate over the logs on the cache device and apply (i.e. recover the cache * metadata from) the valid segments, i.e. those whose checksum is correct. * A valid segment is one that was written completely, without the failures * that typically result from unexpected power loss. * * @max_id (in/out) * - in : The max id found in find_max_id() * - out : The last id applied in this function */ static int do_apply_valid_segments(struct wb_device *wb, u64 *max_id) { int err = 0; struct segment_header *seg; struct segment_header_device *header; u32 i, start_idx; void *rambuf = vmalloc(1 << (SEGMENT_SIZE_ORDER + 9)); if (!rambuf) return -ENOMEM; /* * We start from the segment next to the newest one, which can * be the oldest. Its id can be zero if the logs didn't lap at all. */ start_idx = segment_id_to_idx(wb, *max_id + 1); *max_id = 0; for (i = start_idx; i < (start_idx + wb->nr_segments); i++) { u32 actual, expected, k; div_u64_rem(i, wb->nr_segments, &k); seg = segment_at(wb, k); err = read_whole_segment(rambuf, wb, seg); if (err) break; header = rambuf; /* * We can't break here. * Consider the id sequence [1,2,3,0,0,0]: * the max_id is 3 and we start from the 4th segment. * If we broke here, the valid logs (1,2,3) would be ignored. */ if (!le64_to_cpu(header->id)) continue; /* * Compare the checksums: * if they don't match, we discard this and the subsequent logs. */ actual = calc_checksum(rambuf, header->length); expected = le32_to_cpu(header->checksum); if (actual != expected) { DMWARN("Checksum incorrect id:%llu checksum: %u != %u", (long long unsigned int) le64_to_cpu(header->id), actual, expected); break; } /* This segment is correct, so we apply it */ err = apply_segment_header_device(wb, seg, header); if (err) break; *max_id = le64_to_cpu(header->id); } vfree(rambuf); return err; } static int apply_valid_segments(struct wb_device *wb, u64 *max_id) { /* * Fast path. * If the max_id is zero, there are obviously no valid segments. * For fast initialization, we quit here immediately. */ if (!(*max_id)) return 0; return do_apply_valid_segments(wb, max_id); } static int infer_last_writeback_id(struct wb_device *wb) { int err = 0; u64 inferred_last_writeback_id; u64 record_id; struct superblock_record_device uninitialized_var(record); err = read_superblock_record(&record, wb); if (err) return err; inferred_last_writeback_id = SUB_ID(atomic64_read(&wb->last_flushed_segment_id), wb->nr_segments); /* * If last_writeback_id is recorded on the superblock, * we can eliminate unnecessary writeback of the segments that were * already written back. */ record_id = le64_to_cpu(record.last_writeback_segment_id); if (record_id > inferred_last_writeback_id) { u64 id; for (id = inferred_last_writeback_id + 1; id <= record_id; id++) mark_clean_seg(wb, get_segment_header_by_id(wb, id)); inferred_last_writeback_id = record_id; } atomic64_set(&wb->last_writeback_segment_id, inferred_last_writeback_id); return err; } /* * Replay all the logs on the cache device to reconstruct the in-memory metadata. * * Algorithm: * 1. Find the maximum id. * 2. Start from the segment next to the newest one and iterate over all the logs. * 3. Skip a log if its id is 0 or its checksum is incorrect. * 4.
Apply otherwise. * * This algorithm is robust for floppy SSD that may write a segment partially * or lose data on its buffer on power fault. */ static int replay_log_on_cache(struct wb_device *wb) { int err = 0; u64 max_id; err = find_max_id(wb, &max_id); if (err) { DMERR("find_max_id failed"); return err; } err = apply_valid_segments(wb, &max_id); if (err) { DMERR("apply_valid_segments failed"); return err; } /* Setup last_flushed_segment_id */ atomic64_set(&wb->last_flushed_segment_id, max_id); /* Setup last_queued_segment_id */ atomic64_set(&wb->last_queued_segment_id, max_id); /* Setup last_writeback_segment_id */ infer_last_writeback_id(wb); return err; } /* * Acquire and initialize the first segment header for our caching. */ static void prepare_first_seg(struct wb_device *wb) { u64 init_segment_id = atomic64_read(&wb->last_flushed_segment_id) + 1; acquire_new_seg(wb, init_segment_id); cursor_init(wb); } /* * Recover all the cache state from the persistent devices */ static int recover_cache(struct wb_device *wb) { int err = 0; err = replay_log_on_cache(wb); if (err) { DMERR("replay_log_on_cache failed"); return err; } prepare_first_seg(wb); return 0; } /*----------------------------------------------------------------------------*/ static struct writeback_segment *alloc_writeback_segment(struct wb_device *wb, gfp_t gfp) { u8 i; struct writeback_segment *writeback_seg = kmalloc(sizeof(*writeback_seg), gfp); if (!writeback_seg) goto bad_writeback_seg; writeback_seg->ios = kmalloc(wb->nr_caches_inseg * sizeof(struct writeback_io), gfp); if (!writeback_seg->ios) goto bad_ios; writeback_seg->buf = vmalloc((1 << (SEGMENT_SIZE_ORDER + 9)) - (1 << 12)); if (!writeback_seg->buf) goto bad_buf; for (i = 0; i < wb->nr_caches_inseg; i++) { struct writeback_io *writeback_io = writeback_seg->ios + i; writeback_io->data = writeback_seg->buf + (i << 12); } return writeback_seg; bad_buf: kfree(writeback_seg->ios); bad_ios: kfree(writeback_seg); bad_writeback_seg: return NULL; } static void free_writeback_segment(struct wb_device *wb, struct writeback_segment *writeback_seg) { vfree(writeback_seg->buf); kfree(writeback_seg->ios); kfree(writeback_seg); } /* * Try to allocate new writeback buffer by the @nr_batch size. * On success, it frees the old buffer. * * Bad user may set # of batches that can hardly allocate. * This function is even robust in such case. */ static void free_writeback_ios(struct wb_device *wb) { size_t i; for (i = 0; i < wb->nr_cur_batched_writeback; i++) free_writeback_segment(wb, *(wb->writeback_segs + i)); kfree(wb->writeback_segs); } /* * Request to allocate data structures to write back @nr_batch segments. * Previous structures are preserved in case of failure. */ int try_alloc_writeback_ios(struct wb_device *wb, size_t nr_batch, gfp_t gfp) { int err = 0; size_t i; struct writeback_segment **writeback_segs = kzalloc( nr_batch * sizeof(struct writeback_segment *), gfp); if (!writeback_segs) return -ENOMEM; for (i = 0; i < nr_batch; i++) { struct writeback_segment *alloced = alloc_writeback_segment(wb, gfp); if (!alloced) { size_t j; for (j = 0; j < i; j++) free_writeback_segment(wb, writeback_segs[j]); kfree(writeback_segs); DMERR("Failed to allocate writeback_segs"); return -ENOMEM; } writeback_segs[i] = alloced; } /* * Free old buffers if exists. * wb->writeback_segs is firstly NULL under constructor .ctr. 
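 *
 * In other words: the very first call (from the constructor path) simply
 * installs the freshly allocated array, while a later call, e.g. when
 * nr_max_batched_writeback is re-tuned, frees the old array only after the
 * replacement has been fully allocated, so a failed resize leaves the
 * previous buffers intact.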
*/ if (wb->writeback_segs) free_writeback_ios(wb); /* And then swap by new values */ wb->writeback_segs = writeback_segs; wb->nr_writeback_segs = nr_batch; return err; } /*----------------------------------------------------------------------------*/ #define CREATE_DAEMON(name) \ do { \ wb->name = kthread_create( \ name##_proc, wb, "dmwb_" #name); \ if (IS_ERR(wb->name)) { \ err = PTR_ERR(wb->name); \ wb->name = NULL; \ DMERR("couldn't spawn " #name); \ goto bad_##name; \ } \ wake_up_process(wb->name); \ } while (0) /* * Alloc and then setup the initial state of the metadata * * Metadata: * - Segment header array * - Metablocks * - Hash table */ static int init_metadata(struct wb_device *wb) { int err = 0; err = init_segment_header_array(wb); if (err) { DMERR("init_segment_header_array failed"); goto bad_alloc_segment_header_array; } err = ht_empty_init(wb); if (err) { DMERR("ht_empty_init failed"); goto bad_alloc_ht; } return err; bad_alloc_ht: free_segment_header_array(wb); bad_alloc_segment_header_array: return err; } static void free_metadata(struct wb_device *wb) { free_ht(wb); free_segment_header_array(wb); } static int init_writeback_daemon(struct wb_device *wb) { int err = 0; size_t nr_batch; atomic_set(&wb->writeback_fail_count, 0); atomic_set(&wb->writeback_io_count, 0); nr_batch = 32; wb->nr_max_batched_writeback = nr_batch; if (try_alloc_writeback_ios(wb, nr_batch, GFP_KERNEL)) return -ENOMEM; init_waitqueue_head(&wb->writeback_wait_queue); init_waitqueue_head(&wb->wait_drop_caches); init_waitqueue_head(&wb->writeback_io_wait_queue); wb->allow_writeback = false; wb->urge_writeback = false; wb->force_drop = false; CREATE_DAEMON(writeback_daemon); return err; bad_writeback_daemon: free_writeback_ios(wb); return err; } static int init_flush_daemon(struct wb_device *wb) { int err = 0; init_waitqueue_head(&wb->flush_wait_queue); CREATE_DAEMON(flush_daemon); return err; bad_flush_daemon: return err; } static int init_flush_barrier_work(struct wb_device *wb) { wb->barrier_wq = create_singlethread_workqueue("dmwb_barrier"); if (!wb->barrier_wq) { DMERR("Failed to allocate barrier_wq"); return -ENOMEM; } bio_list_init(&wb->barrier_ios); INIT_WORK(&wb->flush_barrier_work, flush_barrier_ios); return 0; } static int init_writeback_modulator(struct wb_device *wb) { int err = 0; wb->writeback_threshold = 0; CREATE_DAEMON(writeback_modulator); return err; bad_writeback_modulator: return err; } static int init_sb_record_updater(struct wb_device *wb) { int err = 0; wb->update_sb_record_interval = 0; CREATE_DAEMON(sb_record_updater); return err; bad_sb_record_updater: return err; } static int init_data_synchronizer(struct wb_device *wb) { int err = 0; wb->sync_data_interval = 0; CREATE_DAEMON(data_synchronizer); return err; bad_data_synchronizer: return err; } int resume_cache(struct wb_device *wb) { int err = 0; wb->nr_segments = calc_nr_segments(wb->cache_dev, wb); wb->nr_caches_inseg = (1 << (SEGMENT_SIZE_ORDER - 3)) - 1; wb->nr_caches = wb->nr_segments * wb->nr_caches_inseg; err = init_devices(wb); if (err) goto bad_devices; err = init_metadata(wb); if (err) goto bad_metadata; err = init_writeback_daemon(wb); if (err) { DMERR("init_writeback_daemon failed"); goto bad_writeback_daemon; } err = recover_cache(wb); if (err) { DMERR("recover_cache failed"); goto bad_recover; } err = init_flush_daemon(wb); if (err) { DMERR("init_flush_daemon failed"); goto bad_flush_daemon; } err = init_flush_barrier_work(wb); if (err) { DMERR("init_flush_barrier_work failed"); goto bad_flush_barrier_work; } err = 
init_writeback_modulator(wb); if (err) { DMERR("init_writeback_modulator failed"); goto bad_modulator; } err = init_sb_record_updater(wb); if (err) { DMERR("init_sb_recorder failed"); goto bad_updater; } err = init_data_synchronizer(wb); if (err) { DMERR("init_data_synchronizer failed"); goto bad_synchronizer; } return err; bad_synchronizer: kthread_stop(wb->sb_record_updater); bad_updater: kthread_stop(wb->writeback_modulator); bad_modulator: destroy_workqueue(wb->barrier_wq); bad_flush_barrier_work: kthread_stop(wb->flush_daemon); bad_flush_daemon: bad_recover: kthread_stop(wb->writeback_daemon); free_writeback_ios(wb); bad_writeback_daemon: free_metadata(wb); bad_metadata: free_devices(wb); bad_devices: return err; } void free_cache(struct wb_device *wb) { /* * kthread_stop() wakes up the thread. * So we don't need to wake them up by ourselves. */ kthread_stop(wb->data_synchronizer); kthread_stop(wb->sb_record_updater); kthread_stop(wb->writeback_modulator); destroy_workqueue(wb->barrier_wq); kthread_stop(wb->flush_daemon); kthread_stop(wb->writeback_daemon); free_writeback_ios(wb); free_metadata(wb); free_devices(wb); } dm-writeboost-2.2.6/src/dm-writeboost-metadata.h000066400000000000000000000050261276770115000216460ustar00rootroot00000000000000/* * This file is part of dm-writeboost * Copyright (C) 2012-2016 Akira Hayakawa * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License along * with this program; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
*/ #ifndef DM_WRITEBOOST_METADATA_H #define DM_WRITEBOOST_METADATA_H /*----------------------------------------------------------------------------*/ struct segment_header * get_segment_header_by_id(struct wb_device *, u64 segment_id); struct rambuffer *get_rambuffer_by_id(struct wb_device *wb, u64 id); sector_t calc_mb_start_sector(struct wb_device *, struct segment_header *, u32 mb_idx); u8 mb_idx_inseg(struct wb_device *, u32 mb_idx); struct segment_header *mb_to_seg(struct wb_device *, struct metablock *); bool is_on_buffer(struct wb_device *, u32 mb_idx); /*----------------------------------------------------------------------------*/ struct lookup_key { sector_t sector; }; struct ht_head; struct ht_head *ht_get_head(struct wb_device *, struct lookup_key *); struct metablock *ht_lookup(struct wb_device *, struct ht_head *, struct lookup_key *); void ht_register(struct wb_device *, struct ht_head *, struct metablock *, struct lookup_key *); void ht_del(struct wb_device *, struct metablock *); void discard_caches_inseg(struct wb_device *, struct segment_header *); /*----------------------------------------------------------------------------*/ void prepare_segment_header_device(void *rambuffer, struct wb_device *, struct segment_header *src); u32 calc_checksum(void *rambuffer, u8 length); /*----------------------------------------------------------------------------*/ int try_alloc_writeback_ios(struct wb_device *, size_t nr_batch, gfp_t gfp); /*----------------------------------------------------------------------------*/ int resume_cache(struct wb_device *); void free_cache(struct wb_device *); /*----------------------------------------------------------------------------*/ #endif dm-writeboost-2.2.6/src/dm-writeboost-target.c000066400000000000000000001355071276770115000213570ustar00rootroot00000000000000/* * dm-writeboost * Log-structured Caching for Linux * * This file is part of dm-writeboost * Copyright (C) 2012-2016 Akira Hayakawa * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 2 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License along * with this program; if not, write to the Free Software Foundation, Inc., * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. */ #include "dm-writeboost.h" #include "dm-writeboost-metadata.h" #include "dm-writeboost-daemon.h" #include "linux/sort.h" /*----------------------------------------------------------------------------*/ void do_check_buffer_alignment(void *buf, const char *name, const char *caller) { unsigned long addr = (unsigned long) buf; if (!IS_ALIGNED(addr, 1 << 9)) { DMCRIT("@%s in %s is not sector-aligned. 
I/O buffer must be sector-aligned.", name, caller); BUG(); } } /*----------------------------------------------------------------------------*/ struct wb_io { struct work_struct work; int err; unsigned long err_bits; struct dm_io_request *io_req; unsigned num_regions; struct dm_io_region *regions; }; static void wb_io_fn(struct work_struct *work) { struct wb_io *io = container_of(work, struct wb_io, work); io->err_bits = 0; io->err = dm_io(io->io_req, io->num_regions, io->regions, &io->err_bits); } int wb_io_internal(struct wb_device *wb, struct dm_io_request *io_req, unsigned num_regions, struct dm_io_region *regions, unsigned long *err_bits, bool thread, const char *caller) { int err = 0; if (thread) { struct wb_io io = { .io_req = io_req, .regions = regions, .num_regions = num_regions, }; ASSERT(io_req->notify.fn == NULL); INIT_WORK_ONSTACK(&io.work, wb_io_fn); queue_work(wb->io_wq, &io.work); flush_workqueue(wb->io_wq); destroy_work_on_stack(&io.work); /* Pair with INIT_WORK_ONSTACK */ err = io.err; if (err_bits) *err_bits = io.err_bits; } else { err = dm_io(io_req, num_regions, regions, err_bits); } /* err_bits can be NULL. */ if (err || (err_bits && *err_bits)) { char buf[BDEVNAME_SIZE]; dev_t dev = regions->bdev->bd_dev; unsigned long eb; if (!err_bits) eb = (~(unsigned long)0); else eb = *err_bits; format_dev_t(buf, dev); DMERR("%s() I/O error(%d), bits(%lu), dev(%s), sector(%llu), %s", caller, err, eb, buf, (unsigned long long) regions->sector, req_is_write(io_req) ? "write" : "read"); } return err; } sector_t dm_devsize(struct dm_dev *dev) { return i_size_read(dev->bdev->bd_inode) >> 9; } /*----------------------------------------------------------------------------*/ void bio_endio_compat(struct bio *bio, int error) { #if LINUX_VERSION_CODE >= KERNEL_VERSION(4,3,0) bio->bi_error = error; bio_endio(bio); #else bio_endio(bio, error); #endif } #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,14,0) #define bi_sector(bio) (bio)->bi_iter.bi_sector #else #define bi_sector(bio) (bio)->bi_sector #endif static void bio_remap(struct bio *bio, struct dm_dev *dev, sector_t sector) { bio->bi_bdev = dev->bdev; bi_sector(bio) = sector; } static u8 calc_offset(sector_t sector) { u32 tmp32; div_u64_rem(sector, 1 << 3, &tmp32); return tmp32; } static u8 bio_calc_offset(struct bio *bio) { return calc_offset(bi_sector(bio)); } static bool bio_is_fullsize(struct bio *bio) { return bio_sectors(bio) == (1 << 3); } static bool bio_is_write(struct bio *bio) { return bio_data_dir(bio) == WRITE; } /* * We use 4KB alignment address of original request the as the lookup key. */ static sector_t calc_cache_alignment(sector_t bio_sector) { return div_u64(bio_sector, 1 << 3) * (1 << 3); } /*----------------------------------------------------------------------------*/ /* * Wake up the processes on the wq if the wq is active. * (At least a process is waiting on it) * This function should only used for wq that is rarely active. * Otherwise ordinary wake_up() should be used instead. 
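 *
 * The intended pairing, taken loosely from the inflight_ios_wq usage later
 * in this file, looks like:
 *
 *   // waiter side
 *   wait_event(wb->inflight_ios_wq, !atomic_read(&seg->nr_inflight_ios));
 *
 *   // completion side
 *   if (atomic_dec_and_test(&seg->nr_inflight_ios))
 *           wake_up_active_wq(&wb->inflight_ios_wq);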
*/ static void wake_up_active_wq(wait_queue_head_t *wq) { if (unlikely(waitqueue_active(wq))) wake_up(wq); } /*----------------------------------------------------------------------------*/ static u8 count_dirty_caches_remained(struct segment_header *seg) { u8 i, count = 0; struct metablock *mb; for (i = 0; i < seg->length; i++) { mb = seg->mb_array + i; if (mb->dirtiness.is_dirty) count++; } return count; } void inc_nr_dirty_caches(struct wb_device *wb) { ASSERT(wb); atomic64_inc(&wb->nr_dirty_caches); } void dec_nr_dirty_caches(struct wb_device *wb) { ASSERT(wb); if (atomic64_dec_and_test(&wb->nr_dirty_caches)) wake_up_interruptible(&wb->wait_drop_caches); } static bool taint_mb(struct wb_device *wb, struct metablock *mb, u8 data_bits) { unsigned long flags; bool flipped = false; ASSERT(data_bits > 0); spin_lock_irqsave(&wb->mb_lock, flags); if (!mb->dirtiness.is_dirty) { mb->dirtiness.is_dirty = true; flipped = true; } mb->dirtiness.data_bits |= data_bits; spin_unlock_irqrestore(&wb->mb_lock, flags); return flipped; } bool mark_clean_mb(struct wb_device *wb, struct metablock *mb) { unsigned long flags; bool flipped = false; spin_lock_irqsave(&wb->mb_lock, flags); if (mb->dirtiness.is_dirty) { mb->dirtiness.is_dirty = false; flipped = true; } spin_unlock_irqrestore(&wb->mb_lock, flags); return flipped; } /* * Read the dirtiness of a metablock at the moment. */ struct dirtiness read_mb_dirtiness(struct wb_device *wb, struct segment_header *seg, struct metablock *mb) { unsigned long flags; struct dirtiness retval; spin_lock_irqsave(&wb->mb_lock, flags); retval = mb->dirtiness; spin_unlock_irqrestore(&wb->mb_lock, flags); return retval; } /*----------------------------------------------------------------------------*/ void cursor_init(struct wb_device *wb) { wb->cursor = wb->current_seg->start_idx; wb->current_seg->length = 0; } /* * Advance the cursor and return the old cursor. * After returned, nr_inflight_ios is incremented to wait for this write to complete. */ static u32 advance_cursor(struct wb_device *wb) { u32 old; if (wb->cursor == wb->nr_caches) wb->cursor = 0; old = wb->cursor; wb->cursor++; wb->current_seg->length++; BUG_ON(wb->current_seg->length > wb->nr_caches_inseg); atomic_inc(&wb->current_seg->nr_inflight_ios); return old; } static bool needs_queue_seg(struct wb_device *wb) { bool rambuf_no_space = !mb_idx_inseg(wb, wb->cursor); return rambuf_no_space; } /*----------------------------------------------------------------------------*/ static void copy_barrier_requests(struct rambuffer *rambuf, struct wb_device *wb) { bio_list_init(&rambuf->barrier_ios); bio_list_merge(&rambuf->barrier_ios, &wb->barrier_ios); bio_list_init(&wb->barrier_ios); } static void prepare_rambuffer(struct rambuffer *rambuf, struct wb_device *wb, struct segment_header *seg) { rambuf->seg = seg; prepare_segment_header_device(rambuf->data, wb, seg); copy_barrier_requests(rambuf, wb); } static void init_rambuffer(struct wb_device *wb) { memset(wb->current_rambuf->data, 0, 1 << 12); } /* * Acquire a new RAM buffer for the new segment. */ static void __acquire_new_rambuffer(struct wb_device *wb, u64 id) { wait_for_flushing(wb, SUB_ID(id, NR_RAMBUF_POOL)); wb->current_rambuf = get_rambuffer_by_id(wb, id); init_rambuffer(wb); } static void __acquire_new_seg(struct wb_device *wb, u64 id) { struct segment_header *new_seg = get_segment_header_by_id(wb, id); /* * We wait for all requests to the new segment is consumed. * Mutex taken guarantees that no new I/O to this segment is coming in. 
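 *
 * Note that reusing a segment slot also means waiting out the previous lap:
 * for example, with nr_segments = 100, acquiring id 457 waits until segment
 * id 357, i.e. SUB_ID(457, 100), the previous occupant of the same slot,
 * has been written back.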
*/ wait_event(wb->inflight_ios_wq, !atomic_read(&new_seg->nr_inflight_ios)); wait_for_writeback(wb, SUB_ID(id, wb->nr_segments)); if (count_dirty_caches_remained(new_seg)) { DMERR("%u dirty caches remained. id:%llu", count_dirty_caches_remained(new_seg), id); BUG(); } discard_caches_inseg(wb, new_seg); /* * We mustn't set new id to the new segment before * all wait_* events are done since they uses those id for waiting. */ new_seg->id = id; wb->current_seg = new_seg; } /* * Acquire the new segment and RAM buffer for the following writes. * Guarantees all dirty caches in the segments are written back and * all metablocks in it are invalidated (Linked to null head). */ void acquire_new_seg(struct wb_device *wb, u64 id) { __acquire_new_rambuffer(wb, id); __acquire_new_seg(wb, id); } static void prepare_new_seg(struct wb_device *wb) { u64 next_id = wb->current_seg->id + 1; acquire_new_seg(wb, next_id); cursor_init(wb); } /*----------------------------------------------------------------------------*/ static void queue_flush_job(struct wb_device *wb) { wait_event(wb->inflight_ios_wq, !atomic_read(&wb->current_seg->nr_inflight_ios)); prepare_rambuffer(wb->current_rambuf, wb, wb->current_seg); smp_wmb(); atomic64_inc(&wb->last_queued_segment_id); wake_up_process(wb->flush_daemon); } static void queue_current_buffer(struct wb_device *wb) { queue_flush_job(wb); prepare_new_seg(wb); } /* * queue_current_buffer if the RAM buffer can't make space any more. */ static void might_queue_current_buffer(struct wb_device *wb) { if (needs_queue_seg(wb)) { update_nr_empty_segs(wb); queue_current_buffer(wb); } } /* * Flush out all the transient data at a moment but _NOT_ persistently. */ void flush_current_buffer(struct wb_device *wb) { struct segment_header *old_seg; mutex_lock(&wb->io_lock); old_seg = wb->current_seg; queue_current_buffer(wb); mutex_unlock(&wb->io_lock); wait_for_flushing(wb, old_seg->id); } /*----------------------------------------------------------------------------*/ static void inc_stat(struct wb_device *wb, int rw, bool found, bool on_buffer, bool fullsize) { atomic64_t *v; int i = 0; if (rw) i |= (1 << STAT_WRITE); if (found) i |= (1 << STAT_HIT); if (on_buffer) i |= (1 << STAT_ON_BUFFER); if (fullsize) i |= (1 << STAT_FULLSIZE); v = &wb->stat[i]; atomic64_inc(v); } static void clear_stat(struct wb_device *wb) { size_t i; for (i = 0; i < STATLEN; i++) { atomic64_t *v = &wb->stat[i]; atomic64_set(v, 0); } atomic64_set(&wb->count_non_full_flushed, 0); } /*----------------------------------------------------------------------------*/ #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,14,0) #define bv_vec struct bio_vec #define bv_page(vec) vec.bv_page #define bv_offset(vec) vec.bv_offset #define bv_len(vec) vec.bv_len #define bv_it struct bvec_iter #else #define bv_vec struct bio_vec * #define bv_page(vec) vec->bv_page #define bv_offset(vec) vec->bv_offset #define bv_len(vec) vec->bv_len #define bv_it int #endif /* * Incoming bio may have multiple bio vecs as a result bvec merging. * We shouldn't use bio_data directly to access to whole payload but * should iterate over the vector. */ static void copy_bio_payload(void *buf, struct bio *bio) { size_t sum = 0; bv_vec vec; bv_it it; bio_for_each_segment(vec, bio, it) { void *dst = kmap_atomic(bv_page(vec)); size_t l = bv_len(vec); memcpy(buf, dst + bv_offset(vec), l); kunmap_atomic(dst); buf += l; sum += l; } ASSERT(sum == (bio_sectors(bio) << 9)); } /* * Copy 512B buffer data to bio payload's i-th 512B area. 
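 *
 * The copy_bits convention used here is one bit per 512B sector within the
 * 4KB block. For example, a bio covering sectors 2..4 of a block has
 * offset 2 and length 3, giving to_mask(2, 3) == 0x1c; copy_to_bio_payload()
 * then copies only the 512B slices whose bit is set, which is how partially
 * dirty blocks are overlaid on data read from the backing store.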
*/ static void __copy_to_bio_payload(struct bio *bio, void *buf, u8 i) { size_t head = 0; size_t tail = head; bv_vec vec; bv_it it; bio_for_each_segment(vec, bio, it) { size_t l = bv_len(vec); tail += l; if ((i << 9) < tail) { void *dst = kmap_atomic(bv_page(vec)); size_t offset = (i << 9) - head; BUG_ON((l - offset) < (1 << 9)); memcpy(dst + bv_offset(vec) + offset, buf, 1 << 9); kunmap_atomic(dst); return; } head += l; } BUG(); } /* * Copy 4KB buffer to bio payload with care to bio offset and copy bits. */ static void copy_to_bio_payload(struct bio *bio, void *buf, u8 copy_bits) { u8 offset = bio_calc_offset(bio); u8 i; for (i = 0; i < bio_sectors(bio); i++) { u8 i_offset = i + offset; if (copy_bits & (1 << i_offset)) __copy_to_bio_payload(bio, buf + (i_offset << 9), i); } } /*----------------------------------------------------------------------------*/ struct lookup_result { struct ht_head *head; /* Lookup head used */ struct lookup_key key; /* Lookup key used */ struct segment_header *found_seg; struct metablock *found_mb; bool found; /* Cache hit? */ bool on_buffer; /* Is the metablock found on the RAM buffer? */ }; /* * Lookup a bio relevant cache data. * In case of cache hit, nr_inflight_ios is incremented. */ static void cache_lookup(struct wb_device *wb, struct bio *bio, struct lookup_result *res) { res->key = (struct lookup_key) { .sector = calc_cache_alignment(bi_sector(bio)), }; res->head = ht_get_head(wb, &res->key); res->found_mb = ht_lookup(wb, res->head, &res->key); if (res->found_mb) { res->found_seg = mb_to_seg(wb, res->found_mb); atomic_inc(&res->found_seg->nr_inflight_ios); } res->found = (res->found_mb != NULL); res->on_buffer = false; if (res->found) res->on_buffer = is_on_buffer(wb, res->found_mb->idx); inc_stat(wb, bio_is_write(bio), res->found, res->on_buffer, bio_is_fullsize(bio)); } static void dec_inflight_ios(struct wb_device *wb, struct segment_header *seg) { if (atomic_dec_and_test(&seg->nr_inflight_ios)) wake_up_active_wq(&wb->inflight_ios_wq); } /*----------------------------------------------------------------------------*/ static u8 to_mask(u8 offset, u8 count) { u8 i; u8 result = 0; if (count == 8) { result = 255; } else { for (i = 0; i < count; i++) result |= (1 << (i + offset)); } return result; } static int fill_payload_by_backing(struct wb_device *wb, struct bio *bio) { struct dm_io_request io_req; struct dm_io_region region; sector_t start = bi_sector(bio); u8 offset = calc_offset(start); u8 len = bio_sectors(bio); u8 copy_bits = to_mask(offset, len); int err = 0; void *buf = mempool_alloc(wb->buf_8_pool, GFP_NOIO); if (!buf) return -ENOMEM; io_req = (struct dm_io_request) { WB_IO_READ, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_KMEM, .mem.ptr.addr = buf + (offset << 9), }; region = (struct dm_io_region) { .bdev = wb->backing_dev->bdev, .sector = start, .count = len, }; err = wb_io(&io_req, 1, ®ion, NULL, true); if (err) goto bad; copy_to_bio_payload(bio, buf, copy_bits); bad: mempool_free(buf, wb->buf_8_pool); return err; } /* * Get the reference to the 4KB-aligned data in RAM buffer. * Since it only takes the reference caller need not to free the pointer. */ static void *ref_buffered_mb(struct wb_device *wb, struct metablock *mb) { sector_t offset = ((mb_idx_inseg(wb, mb->idx) + 1) << 3); return wb->current_rambuf->data + (offset << 9); } /* * Read cache block of the mb. * Caller should free the returned pointer after used by mempool_alloc(). 
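 *
 * A typical call site (mirroring what process_read() does further below)
 * looks roughly like:
 *
 *   void *buf = read_mb(wb, seg, mb, dirtiness.data_bits);
 *   if (!buf)
 *           return -EIO;
 *   copy_to_bio_payload(bio, buf, dirtiness.data_bits);
 *   mempool_free(buf, wb->buf_8_pool);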
*/ static void *read_mb(struct wb_device *wb, struct segment_header *seg, struct metablock *mb, u8 data_bits) { u8 i; void *result = mempool_alloc(wb->buf_8_pool, GFP_NOIO); if (!result) return NULL; for (i = 0; i < 8; i++) { int err = 0; struct dm_io_request io_req; struct dm_io_region region; if (!(data_bits & (1 << i))) continue; io_req = (struct dm_io_request) { WB_IO_READ, .client = wb->io_client, .notify.fn = NULL, .mem.type = DM_IO_KMEM, .mem.ptr.addr = result + (i << 9), }; region = (struct dm_io_region) { .bdev = wb->cache_dev->bdev, .sector = calc_mb_start_sector(wb, seg, mb->idx) + i, .count = 1, }; err = wb_io(&io_req, 1, ®ion, NULL, true); if (err) { mempool_free(result, wb->buf_8_pool); return NULL; } } return result; } /*----------------------------------------------------------------------------*/ enum PBD_FLAG { PBD_NONE = 0, PBD_WILL_CACHE = 1, PBD_READ_SEG = 2, }; #if LINUX_VERSION_CODE >= KERNEL_VERSION(4,6,0) #define PER_BIO_DATA_SIZE per_io_data_size #else #define PER_BIO_DATA_SIZE per_bio_data_size #endif struct per_bio_data { enum PBD_FLAG type; union { u32 cell_idx; struct segment_header *seg; }; }; #define per_bio_data(wb, bio) ((struct per_bio_data *)dm_per_bio_data((bio), (wb)->ti->PER_BIO_DATA_SIZE)) /*----------------------------------------------------------------------------*/ #define read_cache_cell_from_node(node) rb_entry((node), struct read_cache_cell, rb_node) static void read_cache_add(struct read_cache_cells *cells, struct read_cache_cell *cell) { struct rb_node **rbp, *parent; rbp = &cells->rb_root.rb_node; parent = NULL; while (*rbp) { struct read_cache_cell *parent_cell; parent = *rbp; parent_cell = read_cache_cell_from_node(parent); if (cell->sector < parent_cell->sector) rbp = &(*rbp)->rb_left; else rbp = &(*rbp)->rb_right; } rb_link_node(&cell->rb_node, parent, rbp); rb_insert_color(&cell->rb_node, &cells->rb_root); } static struct read_cache_cell *lookup_read_cache_cell(struct wb_device *wb, sector_t sector) { struct rb_node **rbp, *parent; rbp = &wb->read_cache_cells->rb_root.rb_node; parent = NULL; while (*rbp) { struct read_cache_cell *parent_cell; parent = *rbp; parent_cell = read_cache_cell_from_node(parent); if (parent_cell->sector == sector) return parent_cell; if (sector < parent_cell->sector) rbp = &(*rbp)->rb_left; else rbp = &(*rbp)->rb_right; } return NULL; } static void read_cache_cancel_cells(struct read_cache_cells *cells, u32 n) { u32 i; u32 last = cells->cursor + cells->seqcount; if (last > cells->size) last = cells->size; for (i = cells->cursor; i < last; i++) { struct read_cache_cell *cell = cells->array + i; cell->cancelled = true; } } /* * Track the forefront read address and cancel cells in case of over threshold. * If the cell is cancelled foreground, we can save the memory copy in the background. 
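 *
 * A 4KB sequential stream advances the sector by 8 on every read. For
 * example, with read_cache_threshold = 4, reads at sectors 0, 8, 16, 24, 32
 * push seqcount past the threshold on the fifth read: the cells reserved so
 * far in the run are cancelled and every further cell of the stream is
 * cancelled on arrival, so purely sequential reads are not staged.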
*/ static void read_cache_cancel_foreground(struct read_cache_cells *cells, struct read_cache_cell *new_cell) { if (new_cell->sector == (cells->last_sector + 8)) cells->seqcount++; else { cells->seqcount = 1; cells->over_threshold = false; } if (cells->seqcount > cells->threshold) { if (cells->over_threshold) new_cell->cancelled = true; else { cells->over_threshold = true; read_cache_cancel_cells(cells, cells->seqcount); } } cells->last_sector = new_cell->sector; } static bool reserve_read_cache_cell(struct wb_device *wb, struct bio *bio) { struct per_bio_data *pbd; struct read_cache_cells *cells = wb->read_cache_cells; struct read_cache_cell *found, *new_cell; ASSERT(cells->threshold > 0); if (!ACCESS_ONCE(wb->read_cache_threshold)) return false; if (!cells->cursor) return false; /* * We only cache 4KB read data for following reasons: * 1) Caching partial data (< 4KB) is likely meaningless. * 2) Caching partial data makes the read-caching mechanism very hard. */ if (!bio_is_fullsize(bio)) return false; /* * We don't need to reserve the same address twice * because it's either unchanged or invalidated. */ found = lookup_read_cache_cell(wb, bi_sector(bio)); if (found) return false; cells->cursor--; new_cell = cells->array + cells->cursor; new_cell->sector = bi_sector(bio); read_cache_add(cells, new_cell); pbd = per_bio_data(wb, bio); pbd->type = PBD_WILL_CACHE; pbd->cell_idx = cells->cursor; /* Cancel the new_cell if needed */ read_cache_cancel_foreground(cells, new_cell); return true; } static void might_cancel_read_cache_cell(struct wb_device *wb, struct bio *bio) { struct read_cache_cell *found; found = lookup_read_cache_cell(wb, calc_cache_alignment(bi_sector(bio))); if (found) found->cancelled = true; } static void read_cache_cell_copy_data(struct wb_device *wb, struct bio *bio, unsigned long error) { struct per_bio_data *pbd = per_bio_data(wb, bio); struct read_cache_cells *cells = wb->read_cache_cells; struct read_cache_cell *cell = cells->array + pbd->cell_idx; ASSERT(pbd->type == PBD_WILL_CACHE); /* Data can be broken. So don't stage. */ if (error) cell->cancelled = true; /* * We can omit copying if the cell is cancelled but * copying for a non-cancelled cell isn't problematic. */ if (!cell->cancelled) copy_bio_payload(cell->data, bio); if (atomic_dec_and_test(&cells->ack_count)) queue_work(cells->wq, &wb->read_cache_work); } /* * Get a read cache cell through simplified write path if the cell data isn't stale. */ static void inject_read_cache(struct wb_device *wb, struct read_cache_cell *cell) { struct metablock *mb; u32 _mb_idx_inseg; struct segment_header *seg; struct lookup_key key = { .sector = cell->sector, }; struct ht_head *head = ht_get_head(wb, &key); mutex_lock(&wb->io_lock); /* * if might_cancel_read_cache_cell() on the foreground * cancelled this cell, the data is now stale. */ if (cell->cancelled) { mutex_unlock(&wb->io_lock); return; } might_queue_current_buffer(wb); seg = wb->current_seg; _mb_idx_inseg = mb_idx_inseg(wb, advance_cursor(wb)); /* * We should copy the cell data into the rambuf with lock held * otherwise subsequent write data may be written first and then overwritten by * the old data in the cell. 
*/ memcpy(wb->current_rambuf->data + ((_mb_idx_inseg + 1) << 12), cell->data, 1 << 12); mb = seg->mb_array + _mb_idx_inseg; ASSERT(!mb->dirtiness.is_dirty); mb->dirtiness.data_bits = 255; ht_register(wb, head, mb, &key); mutex_unlock(&wb->io_lock); dec_inflight_ios(wb, seg); } static void free_read_cache_cell_data(struct read_cache_cells *cells) { u32 i; for (i = 0; i < cells->size; i++) { struct read_cache_cell *cell = cells->array + i; vfree(cell->data); } } static struct read_cache_cells *alloc_read_cache_cells(struct wb_device *wb, u32 n) { struct read_cache_cells *cells; u32 i; cells = kmalloc(sizeof(struct read_cache_cells), GFP_KERNEL); if (!cells) return NULL; cells->size = n; cells->threshold = UINT_MAX; /* Default: every read will be cached */ cells->last_sector = ~0; cells->seqcount = 0; cells->over_threshold = false; cells->array = kmalloc(sizeof(struct read_cache_cell) * n, GFP_KERNEL); if (!cells->array) goto bad_cells_array; for (i = 0; i < cells->size; i++) { struct read_cache_cell *cell = cells->array + i; cell->data = vmalloc(1 << 12); if (!cell->data) { u32 j; for (j = 0; j < i; j++) { cell = cells->array + j; vfree(cell->data); } goto bad_cell_data; } } cells->wq = create_singlethread_workqueue("dmwb_read_cache"); if (!cells->wq) goto bad_wq; return cells; bad_wq: free_read_cache_cell_data(cells); bad_cell_data: kfree(cells->array); bad_cells_array: kfree(cells); return NULL; } static void free_read_cache_cells(struct wb_device *wb) { struct read_cache_cells *cells = wb->read_cache_cells; destroy_workqueue(cells->wq); /* This drains wq. So, must precede the others */ free_read_cache_cell_data(cells); kfree(cells->array); kfree(cells); } static void reinit_read_cache_cells(struct wb_device *wb) { struct read_cache_cells *cells = wb->read_cache_cells; u32 i, cur_threshold; mutex_lock(&wb->io_lock); cells->rb_root = RB_ROOT; cells->cursor = cells->size; atomic_set(&cells->ack_count, cells->size); for (i = 0; i < cells->size; i++) { struct read_cache_cell *cell = cells->array + i; cell->cancelled = false; } cur_threshold = ACCESS_ONCE(wb->read_cache_threshold); if (cur_threshold && (cur_threshold != cells->threshold)) { cells->threshold = cur_threshold; cells->over_threshold = false; } mutex_unlock(&wb->io_lock); } /* * Cancel cells [first, last) */ static void visit_and_cancel_cells(struct rb_node *first, struct rb_node *last) { struct rb_node *rbp = first; while (rbp != last) { struct read_cache_cell *cell = read_cache_cell_from_node(rbp); cell->cancelled = true; rbp = rb_next(rbp); } } /* * Find out sequence from cells and cancel them if larger than threshold. 
*/ static void read_cache_cancel_background(struct read_cache_cells *cells) { struct rb_node *rbp = rb_first(&cells->rb_root); struct rb_node *seqhead = rbp; sector_t last_sector = ~0; u32 seqcount = 0; while (rbp) { struct read_cache_cell *cell = read_cache_cell_from_node(rbp); if (cell->sector == (last_sector + 8)) seqcount++; else { if (seqcount > cells->threshold) visit_and_cancel_cells(seqhead, rbp); seqcount = 1; seqhead = rbp; } last_sector = cell->sector; rbp = rb_next(rbp); } if (seqcount > cells->threshold) visit_and_cancel_cells(seqhead, rbp); } static void read_cache_proc(struct work_struct *work) { struct wb_device *wb = container_of(work, struct wb_device, read_cache_work); struct read_cache_cells *cells = wb->read_cache_cells; u32 i; read_cache_cancel_background(cells); for (i = 0; i < cells->size; i++) { struct read_cache_cell *cell = cells->array + i; inject_read_cache(wb, cell); } reinit_read_cache_cells(wb); } static int init_read_cache_cells(struct wb_device *wb) { struct read_cache_cells *cells; INIT_WORK(&wb->read_cache_work, read_cache_proc); cells = alloc_read_cache_cells(wb, wb->nr_read_cache_cells); if (!cells) return -ENOMEM; wb->read_cache_cells = cells; reinit_read_cache_cells(wb); return 0; } /*----------------------------------------------------------------------------*/ static void initialize_write_io(struct write_io *wio, struct bio *bio) { u8 offset = bio_calc_offset(bio); sector_t count = bio_sectors(bio); copy_bio_payload(wio->data + (offset << 9), bio); wio->data_bits = to_mask(offset, count); } static void memcpy_masked(void *to, u8 protect_bits, void *from, u8 copy_bits) { u8 i; for (i = 0; i < 8; i++) { bool will_copy = copy_bits & (1 << i); bool protected = protect_bits & (1 << i); if (will_copy && (!protected)) { size_t offset = (i << 9); memcpy(to + offset, from + offset, 1 << 9); } } } int prepare_overwrite(struct wb_device *wb, struct segment_header *seg, struct metablock *old_mb, struct write_io* wio, u8 overwrite_bits) { struct dirtiness dirtiness = read_mb_dirtiness(wb, seg, old_mb); bool needs_merge_prev_cache = !(overwrite_bits == 255) || !(dirtiness.data_bits == 255); if (!dirtiness.is_dirty) needs_merge_prev_cache = false; if (overwrite_bits == 255) needs_merge_prev_cache = false; if (unlikely(needs_merge_prev_cache)) { void *buf; wait_for_flushing(wb, seg->id); ASSERT(dirtiness.is_dirty); buf = read_mb(wb, seg, old_mb, dirtiness.data_bits); if (!buf) return -EIO; /* newer data should be prioritized */ memcpy_masked(wio->data, wio->data_bits, buf, dirtiness.data_bits); wio->data_bits |= dirtiness.data_bits; mempool_free(buf, wb->buf_8_pool); } if (mark_clean_mb(wb, old_mb)) dec_nr_dirty_caches(wb); ht_del(wb, old_mb); return 0; } /* * Get a new place to write. 
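 *
 * The data for the in-segment index returned here is later staged into the
 * RAM buffer at byte offset (idx_inseg + 1) * 4096; the "+ 1" skips the 4KB
 * segment header kept at the head of the buffer (see write_on_rambuffer()
 * below). For example, the first metablock of a segment lands at offset 4096.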
*/ static struct metablock *prepare_new_write_pos(struct wb_device *wb) { struct metablock *ret = wb->current_seg->mb_array + mb_idx_inseg(wb, advance_cursor(wb)); ASSERT(!ret->dirtiness.is_dirty); ret->dirtiness.data_bits = 0; return ret; } static void write_on_rambuffer(struct wb_device *wb, struct metablock *write_pos, struct write_io *wio) { size_t mb_offset = (mb_idx_inseg(wb, write_pos->idx) + 1) << 12; void *mb_data = wb->current_rambuf->data + mb_offset; if (wio->data_bits == 255) memcpy(mb_data, wio->data, 1 << 12); else memcpy_masked(mb_data, 0, wio->data, wio->data_bits); } static int do_process_write(struct wb_device *wb, struct bio *bio) { int err = 0; struct metablock *write_pos = NULL; struct lookup_result res; struct write_io wio; wio.data = mempool_alloc(wb->buf_8_pool, GFP_NOIO); if (!wio.data) return -ENOMEM; initialize_write_io(&wio, bio); mutex_lock(&wb->io_lock); cache_lookup(wb, bio, &res); if (res.found) { if (unlikely(res.on_buffer)) { write_pos = res.found_mb; goto do_write; } else { err = prepare_overwrite(wb, res.found_seg, res.found_mb, &wio, wio.data_bits); dec_inflight_ios(wb, res.found_seg); if (err) goto out; } } else might_cancel_read_cache_cell(wb, bio); might_queue_current_buffer(wb); write_pos = prepare_new_write_pos(wb); do_write: ASSERT(write_pos); write_on_rambuffer(wb, write_pos, &wio); if (taint_mb(wb, write_pos, wio.data_bits)) inc_nr_dirty_caches(wb); ht_register(wb, res.head, write_pos, &res.key); out: mutex_unlock(&wb->io_lock); mempool_free(wio.data, wb->buf_8_pool); return err; } static int complete_process_write(struct wb_device *wb, struct bio *bio) { dec_inflight_ios(wb, wb->current_seg); /* * bio with FUA flag has data. * We first handle it as a normal write bio and then as a barrier bio. */ if (bio_is_fua(bio)) { queue_barrier_io(wb, bio); return DM_MAPIO_SUBMITTED; } bio_endio_compat(bio, 0); return DM_MAPIO_SUBMITTED; } /* * (Locking) Dirtiness of a metablock * ---------------------------------- * A cache data is placed either on RAM buffer or SSD if it was flushed. * To make locking easy, we simplify the rule for the dirtiness of a cache data. * 1) If the data is on the RAM buffer, the dirtiness only "increases". * 2) If the data is, on the other hand, on the SSD after flushed the dirtiness * only "decreases". * * These simple rules can remove the possibility of dirtiness fluctuate on the * RAM buffer. */ /* * (Locking) Refcount (in_flight_*) * -------------------------------- * * The basic common idea is * 1) Increment the refcount inside lock * 2) Wait for decrement outside the lock * * process_write: * do_process_write: * mutex_lock (to serialize write) * inc in_flight_ios # refcount on the dst segment * mutex_unlock * * complete_process_write: * dec in_flight_ios * bio_endio(bio) */ static int process_write_wb(struct wb_device *wb, struct bio *bio) { int err = do_process_write(wb, bio); if (err) return err; return complete_process_write(wb, bio); } static int process_write_wa(struct wb_device *wb, struct bio *bio) { struct lookup_result res; mutex_lock(&wb->io_lock); cache_lookup(wb, bio, &res); if (res.found) { dec_inflight_ios(wb, res.found_seg); ht_del(wb, res.found_mb); } might_cancel_read_cache_cell(wb, bio); mutex_unlock(&wb->io_lock); bio_remap(bio, wb->backing_dev, bi_sector(bio)); return DM_MAPIO_REMAPPED; } static int process_write(struct wb_device *wb, struct bio *bio) { return wb->write_around_mode ? 
process_write_wa(wb, bio) : process_write_wb(wb, bio); } struct read_backing_async_context { struct wb_device *wb; struct bio *bio; }; static void read_backing_async_callback_onstack(unsigned long error, struct read_backing_async_context *ctx) { ASSERT(bio_is_fullsize(ctx->bio)); read_cache_cell_copy_data(ctx->wb, ctx->bio, error); if (error) bio_io_error(ctx->bio); else bio_endio_compat(ctx->bio, 0); } static void read_backing_async_callback(unsigned long error, void *context) { struct read_backing_async_context *ctx = context; read_backing_async_callback_onstack(error, ctx); kfree(ctx); } static int read_backing_async(struct wb_device *wb, struct bio *bio) { int err = 0; struct dm_io_request io_req; struct dm_io_region region; struct read_backing_async_context *ctx = kmalloc(sizeof(struct read_backing_async_context), GFP_NOIO); if (!ctx) return -ENOMEM; ctx->wb = wb; ctx->bio = bio; ASSERT(bio_is_fullsize(bio)); io_req = (struct dm_io_request) { WB_IO_READ, .client = wb->io_client, #if LINUX_VERSION_CODE >= KERNEL_VERSION(3,14,0) .mem.type = DM_IO_BIO, .mem.ptr.bio = bio, #else .mem.type = DM_IO_BVEC, .mem.ptr.bvec = bio->bi_io_vec + bio->bi_idx, #endif .notify.fn = read_backing_async_callback, .notify.context = ctx }; region = (struct dm_io_region) { .bdev = wb->backing_dev->bdev, .sector = bi_sector(bio), .count = 8 }; err = wb_io(&io_req, 1, ®ion, NULL, false); if (err) kfree(ctx); return err; } static int process_read(struct wb_device *wb, struct bio *bio) { struct lookup_result res; struct dirtiness dirtiness; struct per_bio_data *pbd; bool reserved = false; mutex_lock(&wb->io_lock); cache_lookup(wb, bio, &res); if (!res.found) reserved = reserve_read_cache_cell(wb, bio); mutex_unlock(&wb->io_lock); if (!res.found) { if (reserved) { /* * Remapping clone bio to the backing store leads to * empty payload in clone_endio(). * To avoid caching junk data, we need this workaround * to call dm_io() to certainly fill the bio payload. */ if (read_backing_async(wb, bio)) { struct read_backing_async_context ctx = { .wb = wb, .bio = bio }; read_backing_async_callback_onstack(1, &ctx); } return DM_MAPIO_SUBMITTED; } else { bio_remap(bio, wb->backing_dev, bi_sector(bio)); return DM_MAPIO_REMAPPED; } } dirtiness = read_mb_dirtiness(wb, res.found_seg, res.found_mb); if (unlikely(res.on_buffer)) { int err = fill_payload_by_backing(wb, bio); if (err) goto read_buffered_mb_exit; if (dirtiness.is_dirty) copy_to_bio_payload(bio, ref_buffered_mb(wb, res.found_mb), dirtiness.data_bits); read_buffered_mb_exit: dec_inflight_ios(wb, res.found_seg); if (unlikely(err)) bio_io_error(bio); else bio_endio_compat(bio, 0); return DM_MAPIO_SUBMITTED; } /* * We need to wait for the segment to be flushed to the cache device. * Without this, we might read the wrong data from the cache device. 
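 *
 * After the wait, the hit is served in one of two ways (sketched):
 *
 *   if (dirtiness.data_bits != 255) {
 *           // partially cached: read the backing store first, then overlay
 *           // the dirty 512B sectors read back from the caching device
 *   } else {
 *           // fully cached: remap the bio straight to the caching device at
 *           // calc_mb_start_sector() plus the bio's offset within the block
 *   }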
*/ wait_for_flushing(wb, res.found_seg->id); if (unlikely(dirtiness.data_bits != 255)) { int err = fill_payload_by_backing(wb, bio); if (err) goto read_mb_exit; if (dirtiness.is_dirty) { void *buf = read_mb(wb, res.found_seg, res.found_mb, dirtiness.data_bits); if (!buf) { err = -EIO; goto read_mb_exit; } copy_to_bio_payload(bio, buf, dirtiness.data_bits); mempool_free(buf, wb->buf_8_pool); } read_mb_exit: dec_inflight_ios(wb, res.found_seg); if (unlikely(err)) bio_io_error(bio); else bio_endio_compat(bio, 0); return DM_MAPIO_SUBMITTED; } pbd = per_bio_data(wb, bio); pbd->type = PBD_READ_SEG; pbd->seg = res.found_seg; bio_remap(bio, wb->cache_dev, calc_mb_start_sector(wb, res.found_seg, res.found_mb->idx) + bio_calc_offset(bio)); return DM_MAPIO_REMAPPED; } static int process_bio(struct wb_device *wb, struct bio *bio) { return bio_is_write(bio) ? process_write(wb, bio) : process_read(wb, bio); } static int process_barrier_bio(struct wb_device *wb, struct bio *bio) { /* barrier bio doesn't have data */ ASSERT(bio_sectors(bio) == 0); queue_barrier_io(wb, bio); return DM_MAPIO_SUBMITTED; } static int writeboost_map(struct dm_target *ti, struct bio *bio) { struct wb_device *wb = ti->private; struct per_bio_data *pbd = per_bio_data(wb, bio); pbd->type = PBD_NONE; if (bio_is_barrier(bio)) return process_barrier_bio(wb, bio); return process_bio(wb, bio); } static int writeboost_end_io(struct dm_target *ti, struct bio *bio, int error) { struct wb_device *wb = ti->private; struct per_bio_data *pbd = per_bio_data(wb, bio); switch (pbd->type) { case PBD_NONE: case PBD_WILL_CACHE: return 0; case PBD_READ_SEG: dec_inflight_ios(wb, pbd->seg); return 0; default: BUG(); } } static int consume_essential_argv(struct wb_device *wb, struct dm_arg_set *as) { int err = 0; struct dm_target *ti = wb->ti; err = dm_get_device(ti, dm_shift_arg(as), dm_table_get_mode(ti->table), &wb->backing_dev); if (err) { DMERR("Failed to get backing_dev"); return err; } err = dm_get_device(ti, dm_shift_arg(as), dm_table_get_mode(ti->table), &wb->cache_dev); if (err) { DMERR("Failed to get cache_dev"); goto bad_get_cache; } return err; bad_get_cache: dm_put_device(ti, wb->backing_dev); return err; } #define consume_kv(name, nr, is_static) { \ if (!strcasecmp(key, #name)) { \ if (!argc) \ break; \ if (test_bit(WB_CREATED, &wb->flags) && is_static) { \ DMERR("%s is a static option", #name); \ break; \ } \ err = dm_read_arg(_args + (nr), as, &tmp, &ti->error); \ if (err) { \ DMERR("%s", ti->error); \ break; \ } \ wb->name = tmp; \ } } static int do_consume_optional_argv(struct wb_device *wb, struct dm_arg_set *as, unsigned argc) { int err = 0; struct dm_target *ti = wb->ti; static struct dm_arg _args[] = { {0, 100, "Invalid writeback_threshold"}, {1, 32, "Invalid nr_max_batched_writeback"}, {0, 3600, "Invalid update_sb_record_interval"}, {0, 3600, "Invalid sync_data_interval"}, {0, 127, "Invalid read_cache_threshold"}, {0, 1, "Invalid write_around_mode"}, {1, 2048, "Invalid nr_read_cache_cells"}, }; unsigned tmp; while (argc) { const char *key = dm_shift_arg(as); argc--; err = -EINVAL; consume_kv(writeback_threshold, 0, false); consume_kv(nr_max_batched_writeback, 1, false); consume_kv(update_sb_record_interval, 2, false); consume_kv(sync_data_interval, 3, false); consume_kv(read_cache_threshold, 4, false); consume_kv(write_around_mode, 5, true); consume_kv(nr_read_cache_cells, 6, true); if (!err) { argc--; } else { ti->error = "Invalid optional key"; break; } } return err; } static int consume_optional_argv(struct wb_device *wb, 
				 struct dm_arg_set *as)
{
	int err = 0;
	struct dm_target *ti = wb->ti;

	static struct dm_arg _args[] = {
		{0, 14, "Invalid optional argc"},
	};
	unsigned argc = 0;

	if (as->argc) {
		err = dm_read_arg_group(_args, as, &argc, &ti->error);
		if (err) {
			DMERR("%s", ti->error);
			return err;
		}
	}

	return do_consume_optional_argv(wb, as, argc);
}

DECLARE_DM_KCOPYD_THROTTLE_WITH_MODULE_PARM(wb_copy_throttle,
		"A percentage of time allocated for one-shot writeback");

static int init_core_struct(struct dm_target *ti)
{
	int err = 0;
	struct wb_device *wb;

	err = dm_set_target_max_io_len(ti, 1 << 3);
	if (err) {
		DMERR("Failed to set max_io_len");
		return err;
	}

	ti->num_flush_bios = 1;
	ti->flush_supported = true;

	/*
	 * dm-writeboost doesn't support TRIM
	 *
	 * https://github.com/akiradeveloper/dm-writeboost/issues/110
	 * - discarding backing data only violates DRAT
	 * - strictly discarding both cache blocks and backing data is nearly impossible
	 *   considering cache hits may occur partially.
	 */
	ti->num_discard_bios = 0;
	ti->discards_supported = false;

	ti->PER_BIO_DATA_SIZE = sizeof(struct per_bio_data);

	wb = kzalloc(sizeof(*wb), GFP_KERNEL);
	if (!wb) {
		DMERR("Failed to allocate wb");
		return -ENOMEM;
	}
	ti->private = wb;
	wb->ti = ti;

	wb->copier = dm_kcopyd_client_create(&dm_kcopyd_throttle);
	if (IS_ERR(wb->copier)) {
		err = PTR_ERR(wb->copier);
		goto bad_kcopyd_client;
	}

	wb->buf_1_cachep = kmem_cache_create("dmwb_buf_1", 1 << 9, 1 << 9, SLAB_RED_ZONE, NULL);
	if (!wb->buf_1_cachep) {
		err = -ENOMEM;
		goto bad_buf_1_cachep;
	}
	wb->buf_1_pool = mempool_create_slab_pool(16, wb->buf_1_cachep);
	if (!wb->buf_1_pool) {
		err = -ENOMEM;
		goto bad_buf_1_pool;
	}

	wb->buf_8_cachep = kmem_cache_create("dmwb_buf_8", 1 << 12, 1 << 12, SLAB_RED_ZONE, NULL);
	if (!wb->buf_8_cachep) {
		err = -ENOMEM;
		goto bad_buf_8_cachep;
	}
	wb->buf_8_pool = mempool_create_slab_pool(16, wb->buf_8_cachep);
	if (!wb->buf_8_pool) {
		err = -ENOMEM;
		goto bad_buf_8_pool;
	}

	wb->io_wq = create_singlethread_workqueue("dmwb_io");
	if (!wb->io_wq) {
		DMERR("Failed to allocate io_wq");
		err = -ENOMEM;
		goto bad_io_wq;
	}

	wb->io_client = dm_io_client_create();
	if (IS_ERR(wb->io_client)) {
		DMERR("Failed to allocate io_client");
		err = PTR_ERR(wb->io_client);
		goto bad_io_client;
	}

	mutex_init(&wb->io_lock);
	init_waitqueue_head(&wb->inflight_ios_wq);
	spin_lock_init(&wb->mb_lock);
	atomic64_set(&wb->nr_dirty_caches, 0);
	clear_bit(WB_CREATED, &wb->flags);

	return err;

bad_io_client:
	destroy_workqueue(wb->io_wq);
bad_io_wq:
	mempool_destroy(wb->buf_8_pool);
bad_buf_8_pool:
	kmem_cache_destroy(wb->buf_8_cachep);
bad_buf_8_cachep:
	mempool_destroy(wb->buf_1_pool);
bad_buf_1_pool:
	kmem_cache_destroy(wb->buf_1_cachep);
bad_buf_1_cachep:
	dm_kcopyd_client_destroy(wb->copier);
bad_kcopyd_client:
	kfree(wb);
	return err;
}

static void free_core_struct(struct wb_device *wb)
{
	dm_io_client_destroy(wb->io_client);
	destroy_workqueue(wb->io_wq);
	mempool_destroy(wb->buf_8_pool);
	kmem_cache_destroy(wb->buf_8_cachep);
	mempool_destroy(wb->buf_1_pool);
	kmem_cache_destroy(wb->buf_1_cachep);
	dm_kcopyd_client_destroy(wb->copier);
	kfree(wb);
}

static int copy_ctr_args(struct wb_device *wb, int argc, const char **argv)
{
	unsigned i;
	const char **copy;

	copy = kcalloc(argc, sizeof(*copy), GFP_KERNEL);
	if (!copy)
		return -ENOMEM;
	for (i = 0; i < argc; i++) {
		copy[i] = kstrdup(argv[i], GFP_KERNEL);
		if (!copy[i]) {
			while (i--)
				kfree(copy[i]);
			kfree(copy);
			return -ENOMEM;
		}
	}

	wb->nr_ctr_args = argc;
	wb->ctr_args = copy;

	return 0;
}

static void free_ctr_args(struct wb_device *wb)
{
	int i;
	for (i = 0; i < wb->nr_ctr_args; i++)
		kfree(wb->ctr_args[i]);
	kfree(wb->ctr_args);
}

#define save_arg(name) wb->name##_saved = wb->name
#define restore_arg(name) if (wb->name##_saved) { wb->name = wb->name##_saved; }

/*
 * Create a writeboost device
 *
 * <essential args>
 * <#optional args> <optional args>
 * optionals are an unordered list of k-v pairs.
 *
 * See doc for detail.
 */
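/*
 * A sketch of a constructor table (the device paths /dev/sdb (backing),
 * /dev/sdc (cache) and the mapped-device name "wbdev" are hypothetical).
 * The two essential args are the backing and caching devices; they are
 * followed by the number of optional tokens and the k-v pairs parsed by
 * do_consume_optional_argv():
 *
 *   dmsetup create wbdev --table \
 *     "0 $(blockdev --getsz /dev/sdb) writeboost /dev/sdb /dev/sdc \
 *      4 writeback_threshold 70 read_cache_threshold 31"
 */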
static int writeboost_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
	int err = 0;
	struct wb_device *wb;

	struct dm_arg_set as;
	as.argc = argc;
	as.argv = argv;

	err = init_core_struct(ti);
	if (err) {
		ti->error = "init_core_struct failed";
		return err;
	}
	wb = ti->private;

	err = copy_ctr_args(wb, argc - 2, (const char **)argv + 2);
	if (err) {
		ti->error = "copy_ctr_args failed";
		goto bad_ctr_args;
	}

	err = consume_essential_argv(wb, &as);
	if (err) {
		ti->error = "consume_essential_argv failed";
		goto bad_essential_argv;
	}

	err = consume_optional_argv(wb, &as);
	if (err) {
		ti->error = "consume_optional_argv failed";
		goto bad_optional_argv;
	}

	save_arg(writeback_threshold);
	save_arg(nr_max_batched_writeback);
	save_arg(update_sb_record_interval);
	save_arg(sync_data_interval);
	save_arg(read_cache_threshold);
	save_arg(nr_read_cache_cells);

	err = resume_cache(wb);
	if (err) {
		ti->error = "resume_cache failed";
		goto bad_resume_cache;
	}

	wb->nr_read_cache_cells = 2048; /* 8MB */
	restore_arg(nr_read_cache_cells);
	err = init_read_cache_cells(wb);
	if (err) {
		ti->error = "init_read_cache_cells failed";
		goto bad_read_cache_cells;
	}

	clear_stat(wb);

	set_bit(WB_CREATED, &wb->flags);

	restore_arg(writeback_threshold);
	restore_arg(nr_max_batched_writeback);
	restore_arg(update_sb_record_interval);
	restore_arg(sync_data_interval);
	restore_arg(read_cache_threshold);

	return err;

bad_read_cache_cells:
	free_cache(wb);
bad_resume_cache:
	dm_put_device(ti, wb->cache_dev);
	dm_put_device(ti, wb->backing_dev);
bad_optional_argv:
bad_essential_argv:
	free_ctr_args(wb);
bad_ctr_args:
	free_core_struct(wb);
	ti->private = NULL;
	return err;
}

static void writeboost_dtr(struct dm_target *ti)
{
	struct wb_device *wb = ti->private;

	free_read_cache_cells(wb);

	free_cache(wb);

	dm_put_device(ti, wb->cache_dev);
	dm_put_device(ti, wb->backing_dev);

	free_ctr_args(wb);
	free_core_struct(wb);
	ti->private = NULL;
}

/*----------------------------------------------------------------------------*/

/*
 * .postsuspend is called before .dtr.
 * We flush out all the transient data and make it persistent.
 */
static void writeboost_postsuspend(struct dm_target *ti)
{
	struct wb_device *wb = ti->private;
	flush_current_buffer(wb);
	blkdev_issue_flush(wb->cache_dev->bdev, GFP_NOIO, NULL);
}

static int writeboost_message(struct dm_target *ti, unsigned argc, char **argv)
{
	struct wb_device *wb = ti->private;

	struct dm_arg_set as;
	as.argc = argc;
	as.argv = argv;

	if (!strcasecmp(argv[0], "clear_stat")) {
		clear_stat(wb);
		return 0;
	}

	if (!strcasecmp(argv[0], "drop_caches")) {
		int err = 0;
		wb->force_drop = true;
		err = wait_event_interruptible(wb->wait_drop_caches,
			!atomic64_read(&wb->nr_dirty_caches));
		wb->force_drop = false;
		return err;
	}

	return do_consume_optional_argv(wb, &as, 2);
}

/*
 * Since writeboost is just a cache target and its cache block size is fixed
 * to 4KB, there is no reason to count the cache device in device iteration.
 */
static int writeboost_iterate_devices(struct dm_target *ti,
				      iterate_devices_callout_fn fn, void *data)
{
	struct wb_device *wb = ti->private;
	struct dm_dev *backing = wb->backing_dev;
	sector_t start = 0;
	sector_t len = dm_devsize(backing);
	return fn(ti, backing, start, len, data);
}

static void writeboost_io_hints(struct dm_target *ti, struct queue_limits *limits)
{
	blk_limits_io_opt(limits, 4096);
}

static void writeboost_status(struct dm_target *ti, status_type_t type,
			      unsigned flags, char *result, unsigned maxlen)
{
	ssize_t sz = 0;
	char buf[BDEVNAME_SIZE];
	struct wb_device *wb = ti->private;
	size_t i;

	switch (type) {
	case STATUSTYPE_INFO:
		DMEMIT("%u %u %llu %llu %llu %llu %llu",
		       (unsigned int) wb->cursor,
		       (unsigned int) wb->nr_caches,
		       (long long unsigned int) wb->nr_segments,
		       (long long unsigned int) wb->current_seg->id,
		       (long long unsigned int) atomic64_read(&wb->last_flushed_segment_id),
		       (long long unsigned int) atomic64_read(&wb->last_writeback_segment_id),
		       (long long unsigned int) atomic64_read(&wb->nr_dirty_caches));

		for (i = 0; i < STATLEN; i++) {
			atomic64_t *v = &wb->stat[i];
			DMEMIT(" %llu", (unsigned long long) atomic64_read(v));
		}
		DMEMIT(" %llu", (unsigned long long) atomic64_read(&wb->count_non_full_flushed));

		DMEMIT(" %d", 10);
		DMEMIT(" writeback_threshold %d", wb->writeback_threshold);
		DMEMIT(" nr_cur_batched_writeback %u", wb->nr_cur_batched_writeback);
		DMEMIT(" sync_data_interval %lu", wb->sync_data_interval);
		DMEMIT(" update_sb_record_interval %lu", wb->update_sb_record_interval);
		DMEMIT(" read_cache_threshold %u", wb->read_cache_threshold);
		break;

	case STATUSTYPE_TABLE:
		format_dev_t(buf, wb->backing_dev->bdev->bd_dev);
		DMEMIT(" %s", buf);
		format_dev_t(buf, wb->cache_dev->bdev->bd_dev);
		DMEMIT(" %s", buf);
		for (i = 0; i < wb->nr_ctr_args; i++)
			DMEMIT(" %s", wb->ctr_args[i]);
		break;
	}
}

static struct target_type writeboost_target = {
	.name = "writeboost",
	.version = {2, 2, 6},
	.module = THIS_MODULE,
	.map = writeboost_map,
	.end_io = writeboost_end_io,
	.ctr = writeboost_ctr,
	.dtr = writeboost_dtr,
	.postsuspend = writeboost_postsuspend,
	.message = writeboost_message,
	.status = writeboost_status,
	.io_hints = writeboost_io_hints,
	.iterate_devices = writeboost_iterate_devices,
};

static int __init writeboost_module_init(void)
{
	int err = 0;

	err = dm_register_target(&writeboost_target);
	if (err < 0) {
		DMERR("Failed to register target");
		return err;
	}

	return err;
}

static void __exit writeboost_module_exit(void)
{
	dm_unregister_target(&writeboost_target);
}

module_init(writeboost_module_init);
module_exit(writeboost_module_exit);

MODULE_AUTHOR("Akira Hayakawa");
MODULE_DESCRIPTION(DM_NAME " writeboost target");
MODULE_LICENSE("GPL");

dm-writeboost-2.2.6/src/dm-writeboost.h

/*
 * This file is part of dm-writeboost
 * Copyright (C) 2012-2016 Akira Hayakawa
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 */

#ifndef DM_WRITEBOOST_H
#define DM_WRITEBOOST_H

#define DM_MSG_PREFIX "writeboost"

#include <linux/module.h>
#include <linux/version.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/mempool.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
#include <linux/rbtree.h>
#include <linux/device-mapper.h>
#include <linux/dm-io.h>
#include <linux/dm-kcopyd.h>

/*----------------------------------------------------------------------------*/

#define SUB_ID(x, y) ((x) > (y) ? (x) - (y) : 0)

/*----------------------------------------------------------------------------*/

/*
 * The detail of the disk format (SSD)
 * -----------------------------------
 *
 * ### Overall
 * Superblock (1MB) + Segment + Segment ...
 *
 * ### Superblock
 * Head <---- ----> Tail
 * Superblock Header (512B) + ... + Superblock Record (512B)
 *
 * ### Segment
 * segment_header_device (512B) +
 * metablock_device * nr_caches_inseg +
 * data[0] (4KB) + data[1] + ... + data[nr_cache_inseg - 1]
 */
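/*
 * A worked example of this layout (a sketch, assuming nr_caches_inseg = 127):
 * the per-segment metadata is 512B + 127 * 16B = 2544B, which fits within a
 * single 4KB block; the 127 4KB data blocks follow, so one segment spans
 * (1 + 127) * 4KB = 512KB = 1024 sectors, i.e. 1 << SEGMENT_SIZE_ORDER sectors
 * with SEGMENT_SIZE_ORDER = 10 as defined below.
 */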
/*----------------------------------------------------------------------------*/

/*
 * Superblock Header (Immutable)
 * -----------------------------
 * First one sector of the superblock region, whose value never changes after
 * the caching device is formatted.
 */
#define WB_MAGIC 0x57427374 /* Magic number "WBst" */
struct superblock_header_device {
	__le32 magic;
} __packed;

/*
 * Superblock Record (Mutable)
 * ---------------------------
 * Last one sector of the superblock region. Records the current cache status
 * if required.
 */
struct superblock_record_device {
	__le64 last_writeback_segment_id;
} __packed;

/*----------------------------------------------------------------------------*/

/*
 * The size must be a divisor of one sector so a metablock_device never
 * straddles two neighboring sectors.
 */
struct metablock_device {
	__le64 sector;
	__u8 dirty_bits;
	__u8 padding[16 - (8 + 1)]; /* 16B */
} __packed;

struct segment_header_device {
	/*
	 * We assume 1 sector write is atomic.
	 * This 1 sector region contains important information such as checksum
	 * of the rest of the segment data. We use 32bit checksum to audit if
	 * the segment is correctly written to the cache device.
	 */
	/* - FROM ------------------------------------ */
	__le64 id;
	__le32 checksum;
	/*
	 * The number of metablocks in this segment header to be considered in
	 * log replay.
	 */
	__u8 length;
	__u8 padding[512 - (8 + 4 + 1)]; /* 512B */
	/* - TO -------------------------------------- */
	struct metablock_device mbarr[0]; /* 16B * N */
} __packed;

/*----------------------------------------------------------------------------*/

struct dirtiness {
	bool is_dirty;
	u8 data_bits;
};

struct metablock {
	sector_t sector; /* The original aligned address */

	u32 idx; /* Const. Index in the metablock array */

	struct hlist_node ht_list; /* Linked to the hash table */

	struct dirtiness dirtiness;
};

#define SZ_MAX (~(size_t)0)
struct segment_header {
	u64 id; /* Must be initialized to 0 */

	u8 length; /* The number of valid metablocks */

	u32 start_idx; /* Const */
	sector_t start_sector; /* Const */

	atomic_t nr_inflight_ios;

	struct metablock mb_array[0];
};

/*----------------------------------------------------------------------------*/

/*
 * RAM buffer is the buffer that any dirty data is first written into.
 */
struct rambuffer {
	struct segment_header *seg;
	void *data;
	struct bio_list barrier_ios; /* List of deferred bios */
};

/*----------------------------------------------------------------------------*/

/*
 * Batched and Sorted Writeback
 * ----------------------------
 *
 * The writeback daemon writes back segments on the cache device efficiently.
 * "Batched" means it writes back a number of segments at the same time in
 * an asynchronous manner.
 * "Sorted" means these writeback IOs are sorted in ascending order of LBA in
 * the backing device. An rb-tree is used to sort the writeback IOs.
 *
 * Reading from the cache device is sequential.
 */

/*
 * Writeback of a cache line (or metablock)
 */
struct writeback_io {
	struct rb_node rb_node;
	sector_t sector; /* Key */
	u64 id; /* Key */

	void *data;
	u8 data_bits;
};
#define writeback_io_from_node(node) \
	rb_entry((node), struct writeback_io, rb_node)

/*
 * Writeback of a segment
 */
struct writeback_segment {
	struct segment_header *seg; /* Segment to write back */
	struct writeback_io *ios;
	void *buf; /* Sequentially read */
};

/*----------------------------------------------------------------------------*/

struct read_cache_cell {
	sector_t sector;
	void *data; /* 4KB data read */
	bool cancelled; /* If true, this cell won't be cached */
	struct rb_node rb_node;
};

struct read_cache_cells {
	u32 size;
	struct read_cache_cell *array;
	u32 cursor;
	atomic_t ack_count;
	sector_t last_sector; /* The last read sector in foreground */
	u32 seqcount;
	u32 threshold;
	bool over_threshold;
	/*
	 * We use an RB-tree as the lookup data structure so that all elements
	 * are sorted. Cells are sorted by sector so we can easily detect
	 * sequential access.
	 */
	struct rb_root rb_root;
	struct workqueue_struct *wq;
};

/*----------------------------------------------------------------------------*/

enum STATFLAG {
	STAT_WRITE = 3, /* Write or read */
	STAT_HIT = 2, /* Hit or miss */
	STAT_ON_BUFFER = 1, /* Found on buffer or on the cache device */
	STAT_FULLSIZE = 0, /* Bio is fullsize or partial */
};
#define STATLEN (1 << 4)

enum WB_FLAG {
	WB_CREATED = 0,
};

#define SEGMENT_SIZE_ORDER 10
#define NR_RAMBUF_POOL 8

/*
 * The context of the cache target instance.
 */
struct wb_device {
	struct dm_target *ti;

	struct dm_dev *backing_dev; /* Slow device (HDD) */
	struct dm_dev *cache_dev; /* Fast device (SSD) */

	bool write_around_mode;

	unsigned nr_ctr_args;
	const char **ctr_args;

	bool do_format; /* True if it was the first creation */

	struct mutex io_lock; /* This mutex is lightweight */

	/*
	 * Wq to wait for nr_inflight_ios to be zero.
	 * nr_inflight_ios of a segment header increments inside io_lock.
	 * While the refcount > 0, the segment can not be overwritten since
	 * there is at least one bio directed to it.
	 */
	wait_queue_head_t inflight_ios_wq;

	spinlock_t mb_lock;

	u8 nr_caches_inseg; /* Const */

	struct kmem_cache *buf_1_cachep;
	mempool_t *buf_1_pool; /* 1 sector buffer pool */
	struct kmem_cache *buf_8_cachep;
	mempool_t *buf_8_pool; /* 8 sector buffer pool */

	struct workqueue_struct *io_wq;
	struct dm_io_client *io_client;

	/*--------------------------------------------------------------------*/

	/******************
	 * Current position
	 ******************/

	u32 cursor; /* Metablock index to write next */
	struct segment_header *current_seg;
	struct rambuffer *current_rambuf;

	/*--------------------------------------------------------------------*/

	/**********************
	 * Segment header array
	 **********************/

	u32 nr_segments; /* Const */
	struct large_array *segment_header_array;

	/*--------------------------------------------------------------------*/

	/********************
	 * Chained Hash table
	 ********************/

	u32 nr_caches; /* Const */
	struct large_array *htable;
	size_t htsize; /* Number of buckets in the hash table */

	/*
	 * Our hashtable has one special bucket called null head.
	 * Orphan metablocks are linked to the null head.
	 */
	struct ht_head *null_head;

	/*--------------------------------------------------------------------*/

	/*****************
	 * RAM buffer pool
	 *****************/

	struct rambuffer *rambuf_pool;

	atomic64_t last_queued_segment_id;

	/*--------------------------------------------------------------------*/

	/********************
	 * One-shot Writeback
	 ********************/

	struct dm_kcopyd_client *copier;

	/*--------------------------------------------------------------------*/

	/**************
	 * Flush Daemon
	 **************/

	struct task_struct *flush_daemon;

	/*
	 * Wait for a specified segment to be flushed. Non-interruptible
	 * cf. wait_for_flushing()
	 */
	wait_queue_head_t flush_wait_queue;

	atomic64_t last_flushed_segment_id;

	/*--------------------------------------------------------------------*/

	/*************************
	 * Barrier deadline worker
	 *************************/

	/*
	 * We shouldn't use kernel-global workqueue for this worker
	 * because it may cause timeout for the flush requests.
	 */
	struct workqueue_struct *barrier_wq;
	struct work_struct flush_barrier_work;
	struct bio_list barrier_ios; /* List of barrier requests */

	/*--------------------------------------------------------------------*/

	/******************
	 * Writeback Daemon
	 ******************/

	struct task_struct *writeback_daemon;

	int allow_writeback;

	int urge_writeback; /* Start writeback immediately */

	int force_drop; /* Don't stop writeback */

	atomic64_t last_writeback_segment_id;

	/*
	 * Wait for a specified segment to be written back. Non-interruptible
	 * cf. wait_for_writeback()
	 */
	wait_queue_head_t writeback_wait_queue;

	/*
	 * Wait for writing back all the dirty caches.
	 * Interruptible
	 */
	wait_queue_head_t wait_drop_caches;

	atomic64_t nr_dirty_caches;

	/*
	 * Wait for a background writeback complete
	 */
	wait_queue_head_t writeback_io_wait_queue;
	atomic_t writeback_io_count;
	atomic_t writeback_fail_count;

	u32 nr_max_batched_writeback; /* Tunable */
	u32 nr_max_batched_writeback_saved;

	struct rb_root writeback_tree;

	u32 nr_writeback_segs;
	struct writeback_segment **writeback_segs;

	u32 nr_cur_batched_writeback; /* Number of segments to be written back */

	u32 nr_empty_segs;

	/*--------------------------------------------------------------------*/

	/*********************
	 * Writeback Modulator
	 *********************/

	struct task_struct *writeback_modulator;

	u8 writeback_threshold; /* Tunable */
	u8 writeback_threshold_saved;

	/*--------------------------------------------------------------------*/

	/***************************
	 * Superblock Record Updater
	 ***************************/

	struct task_struct *sb_record_updater;

	unsigned long update_sb_record_interval; /* Tunable */
	unsigned long update_sb_record_interval_saved;

	/*--------------------------------------------------------------------*/

	/*******************
	 * Data Synchronizer
	 *******************/

	struct task_struct *data_synchronizer;

	unsigned long sync_data_interval; /* Tunable */
	unsigned long sync_data_interval_saved;

	/*--------------------------------------------------------------------*/

	/**************
	 * Read Caching
	 **************/

	u32 nr_read_cache_cells;
	u32 nr_read_cache_cells_saved;

	struct work_struct read_cache_work;

	struct read_cache_cells *read_cache_cells;

	u32 read_cache_threshold; /* Tunable */
	u32 read_cache_threshold_saved;

	/*--------------------------------------------------------------------*/

	/************
	 * Statistics
	 ************/

	atomic64_t stat[STATLEN];
	atomic64_t count_non_full_flushed;

	/*--------------------------------------------------------------------*/

	unsigned long flags;
};

/*----------------------------------------------------------------------------*/

struct write_io {
	void *data; /* 4KB */
	u8 data_bits;
};

void acquire_new_seg(struct wb_device *, u64 id);
void cursor_init(struct wb_device *);

void flush_current_buffer(struct wb_device *);

void inc_nr_dirty_caches(struct wb_device *);
void dec_nr_dirty_caches(struct wb_device *);
bool mark_clean_mb(struct wb_device *, struct metablock *);
struct dirtiness read_mb_dirtiness(struct wb_device *, struct segment_header *,
				   struct metablock *);
int prepare_overwrite(struct wb_device *, struct segment_header *,
		      struct metablock *old_mb, struct write_io *, u8 overwrite_bits);

/*----------------------------------------------------------------------------*/

#define ASSERT(cond) BUG_ON(!(cond))

#define check_buffer_alignment(buf) \
	do_check_buffer_alignment(buf, #buf, __func__)
void do_check_buffer_alignment(void *, const char *, const char *);

void bio_endio_compat(struct bio *bio, int error);

/*
 * dm_io wrapper
 * thread: run dm_io in other thread to avoid deadlock
 */
#define wb_io(io_req, num_regions, regions, err_bits, thread) \
	wb_io_internal(wb, (io_req), (num_regions), (regions), \
		       (err_bits), (thread), __func__)
int wb_io_internal(struct wb_device *, struct dm_io_request *,
		   unsigned num_regions, struct dm_io_region *,
		   unsigned long *err_bits, bool thread, const char *caller);

sector_t dm_devsize(struct dm_dev *);

/*----------------------------------------------------------------------------*/

#if LINUX_VERSION_CODE >= KERNEL_VERSION(4,8,0)
#define req_is_write(req) op_is_write((req)->bi_op)
#define bio_is_barrier(bio) ((bio)->bi_opf & REQ_PREFLUSH)
#define bio_is_fua(bio) ((bio)->bi_opf & REQ_FUA)
#define WB_IO_WRITE .bi_op = REQ_OP_WRITE, .bi_op_flags = 0
#define WB_IO_READ .bi_op = REQ_OP_READ, .bi_op_flags = 0
#define WB_IO_WRITE_FUA .bi_op = REQ_OP_WRITE, .bi_op_flags = REQ_FUA
#else
#define req_is_write(req) ((req)->bi_rw == WRITE)
#define bio_is_barrier(bio) ((bio)->bi_rw & REQ_FLUSH)
#define bio_is_fua(bio) ((bio)->bi_rw & REQ_FUA)
#define WB_IO_WRITE .bi_rw = WRITE
#define WB_IO_READ .bi_rw = READ
#define WB_IO_WRITE_FUA .bi_rw = WRITE_FUA
#endif

/*----------------------------------------------------------------------------*/

#endif
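/*
 * A sketch of runtime interaction with a mapped writeboost device (the
 * device name "wbdev" is hypothetical). clear_stat and drop_caches are
 * handled directly by writeboost_message(); any other message is parsed
 * as a key-value tunable by do_consume_optional_argv():
 *
 *   dmsetup status wbdev                            # STATUSTYPE_INFO fields
 *   dmsetup message wbdev 0 clear_stat              # reset the stat[] counters
 *   dmsetup message wbdev 0 writeback_threshold 80  # retune a dynamic option
 *   dmsetup message wbdev 0 drop_caches             # wait until all dirty caches are written back
 */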